Test Report: KVM_Linux_crio 20991

850300a2a1d8334a3437f5af90c59ac17fc542af:2025-06-30:40237

Failed tests (19/322)

TestAddons/parallel/Registry (363.43s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.276714ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
helpers_test.go:329: TestAddons/parallel/Registry: WARNING: pod list for "kube-system" "actual-registry=true" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:384: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
addons_test.go:384: TestAddons/parallel/Registry: showing logs for failed pods as of 2025-06-30 14:31:40.927723524 +0000 UTC m=+844.015604332
addons_test.go:384: (dbg) Run:  kubectl --context addons-301682 describe po registry-694bd45846-x8cnn -n kube-system
addons_test.go:384: (dbg) kubectl --context addons-301682 describe po registry-694bd45846-x8cnn -n kube-system:
Name:             registry-694bd45846-x8cnn
Namespace:        kube-system
Priority:         0
Service Account:  default
Node:             addons-301682/192.168.39.227
Start Time:       Mon, 30 Jun 2025 14:19:13 +0000
Labels:           actual-registry=true
                  addonmanager.kubernetes.io/mode=Reconcile
                  kubernetes.io/minikube-addons=registry
                  pod-template-hash=694bd45846
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/registry-694bd45846
Containers:
  registry:
    Container ID:   
    Image:          docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7
    Image ID:       
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      REGISTRY_STORAGE_DELETE_ENABLED:  true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-25znc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-25znc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                           Age                   From               Message
----     ------                           ----                  ----               -------
Normal   Scheduled                        12m                   default-scheduler  Successfully assigned kube-system/registry-694bd45846-x8cnn to addons-301682
Warning  Failed                           9m21s (x2 over 11m)   kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7": fetching target platform image selected from image index: reading manifest sha256:61349442e9c3dc07fd06ffa6a4b622bc28960952b6b3adafcb58fa268ce92e70 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           6m24s (x4 over 11m)   kubelet            Error: ErrImagePull
Warning  Failed                           6m24s (x2 over 8m2s)  kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7": reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           5m56s (x7 over 11m)   kubelet            Error: ImagePullBackOff
Normal   Pulling                          5m3s (x5 over 12m)    kubelet            Pulling image "docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7"
Warning  FailedToRetrieveImagePullSecret  2m20s (x24 over 12m)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
Normal   BackOff                          102s (x22 over 11m)   kubelet            Back-off pulling image "docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7"
addons_test.go:384: (dbg) Run:  kubectl --context addons-301682 logs registry-694bd45846-x8cnn -n kube-system
addons_test.go:384: (dbg) Non-zero exit: kubectl --context addons-301682 logs registry-694bd45846-x8cnn -n kube-system: exit status 1 (74.671489ms)

** stderr ** 
	Error from server (BadRequest): container "registry" in pod "registry-694bd45846-x8cnn" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:384: kubectl --context addons-301682 logs registry-694bd45846-x8cnn -n kube-system: exit status 1
addons_test.go:385: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
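
Root cause, per the Events above: kubelet repeatedly hit Docker Hub's unauthenticated pull rate limit ("toomanyrequests") while fetching docker.io/registry:3.0.0, so the container never left ImagePullBackOff; the FailedToRetrieveImagePullSecret (gcp-auth) warning is a secondary symptom. Below is a minimal mitigation sketch, assuming the CI host can either side-load the image or supply Docker Hub credentials; the profile, context, namespace, and image reference are taken from this log, while the secret name "dockerhub-creds" and the DOCKER_USER/DOCKER_TOKEN variables are hypothetical.

    # Option 1: side-load the image from the host so kubelet never pulls it.
    # The pod pins the image by digest, so the loaded image must carry the
    # same digest for the in-cluster reference to resolve.
    out/minikube-linux-amd64 -p addons-301682 image load docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7

    # Option 2: authenticate pulls through an imagePullSecret attached to the
    # pod's service account ("default" in kube-system, per the describe output).
    # DOCKER_USER/DOCKER_TOKEN are hypothetical credential variables.
    kubectl --context addons-301682 -n kube-system create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKER_USER" --docker-password="$DOCKER_TOKEN"
    kubectl --context addons-301682 -n kube-system patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Either approach removes the dependency on the anonymous Docker Hub quota shared by all CI jobs on the host.
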
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-301682 -n addons-301682
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 logs -n 25: (1.570602852s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:17 UTC |                     |
	|         | -p download-only-777401              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | -o=json --download-only              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | -p download-only-781147              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | --download-only -p                   | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | binary-mirror-095233                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44619               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-095233              | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| addons  | disable dashboard -p                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| start   | -p addons-301682 --wait=true         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:25 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:29 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:18:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:18:18.914659 1558425 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:18:18.914940 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.914950 1558425 out.go:358] Setting ErrFile to fd 2...
	I0630 14:18:18.914954 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.915163 1558425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:18:18.915795 1558425 out.go:352] Setting JSON to false
	I0630 14:18:18.916730 1558425 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":28791,"bootTime":1751264308,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:18:18.916865 1558425 start.go:140] virtualization: kvm guest
	I0630 14:18:18.918804 1558425 out.go:177] * [addons-301682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:18:18.920591 1558425 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:18:18.920596 1558425 notify.go:220] Checking for updates...
	I0630 14:18:18.923430 1558425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:18:18.924993 1558425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:18:18.926449 1558425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:18.927916 1558425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:18:18.929158 1558425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:18:18.930609 1558425 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:18:18.965828 1558425 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:18:18.967229 1558425 start.go:304] selected driver: kvm2
	I0630 14:18:18.967249 1558425 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:18:18.967260 1558425 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:18:18.968055 1558425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.968161 1558425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:18:18.984884 1558425 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:18:18.984967 1558425 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:18:18.985269 1558425 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:18:18.985311 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:18.985360 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:18.985373 1558425 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:18:18.985492 1558425 start.go:347] cluster config:
	{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:18.985616 1558425 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.987784 1558425 out.go:177] * Starting "addons-301682" primary control-plane node in "addons-301682" cluster
	I0630 14:18:18.989175 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:18.989236 1558425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 14:18:18.989252 1558425 cache.go:56] Caching tarball of preloaded images
	I0630 14:18:18.989351 1558425 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 14:18:18.989366 1558425 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 14:18:18.989808 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:18.989840 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json: {Name:mk0b97369f17da476cd2a8393ae45d3ce84c94a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:18.990016 1558425 start.go:360] acquireMachinesLock for addons-301682: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:18:18.990075 1558425 start.go:364] duration metric: took 40.808µs to acquireMachinesLock for "addons-301682"
	I0630 14:18:18.990091 1558425 start.go:93] Provisioning new machine with config: &{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:18:18.990156 1558425 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:18:18.992039 1558425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:18:18.992210 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:18:18.992268 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:18:19.009360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0630 14:18:19.009944 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:18:19.010513 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:18:19.010538 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:18:19.010965 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:18:19.011233 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:19.011437 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:19.011652 1558425 start.go:159] libmachine.API.Create for "addons-301682" (driver="kvm2")
	I0630 14:18:19.011686 1558425 client.go:168] LocalClient.Create starting
	I0630 14:18:19.011737 1558425 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 14:18:19.156936 1558425 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 14:18:19.413430 1558425 main.go:141] libmachine: Running pre-create checks...
	I0630 14:18:19.413459 1558425 main.go:141] libmachine: (addons-301682) Calling .PreCreateCheck
	I0630 14:18:19.414009 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:19.414492 1558425 main.go:141] libmachine: Creating machine...
	I0630 14:18:19.414509 1558425 main.go:141] libmachine: (addons-301682) Calling .Create
	I0630 14:18:19.414658 1558425 main.go:141] libmachine: (addons-301682) creating KVM machine...
	I0630 14:18:19.414680 1558425 main.go:141] libmachine: (addons-301682) creating network...
	I0630 14:18:19.416107 1558425 main.go:141] libmachine: (addons-301682) DBG | found existing default KVM network
	I0630 14:18:19.416967 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.416813 1558447 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0630 14:18:19.417027 1558425 main.go:141] libmachine: (addons-301682) DBG | created network xml: 
	I0630 14:18:19.417047 1558425 main.go:141] libmachine: (addons-301682) DBG | <network>
	I0630 14:18:19.417058 1558425 main.go:141] libmachine: (addons-301682) DBG |   <name>mk-addons-301682</name>
	I0630 14:18:19.417065 1558425 main.go:141] libmachine: (addons-301682) DBG |   <dns enable='no'/>
	I0630 14:18:19.417074 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417083 1558425 main.go:141] libmachine: (addons-301682) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:18:19.417095 1558425 main.go:141] libmachine: (addons-301682) DBG |     <dhcp>
	I0630 14:18:19.417105 1558425 main.go:141] libmachine: (addons-301682) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:18:19.417114 1558425 main.go:141] libmachine: (addons-301682) DBG |     </dhcp>
	I0630 14:18:19.417134 1558425 main.go:141] libmachine: (addons-301682) DBG |   </ip>
	I0630 14:18:19.417161 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417196 1558425 main.go:141] libmachine: (addons-301682) DBG | </network>
	I0630 14:18:19.417211 1558425 main.go:141] libmachine: (addons-301682) DBG | 
	I0630 14:18:19.422966 1558425 main.go:141] libmachine: (addons-301682) DBG | trying to create private KVM network mk-addons-301682 192.168.39.0/24...
	I0630 14:18:19.504039 1558425 main.go:141] libmachine: (addons-301682) DBG | private KVM network mk-addons-301682 192.168.39.0/24 created
	I0630 14:18:19.504091 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.503994 1558447 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.504105 1558425 main.go:141] libmachine: (addons-301682) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.504121 1558425 main.go:141] libmachine: (addons-301682) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:18:19.504170 1558425 main.go:141] libmachine: (addons-301682) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:18:19.852642 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.852518 1558447 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa...
	I0630 14:18:19.994685 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994513 1558447 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk...
	I0630 14:18:19.994718 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing magic tar header
	I0630 14:18:19.994732 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing SSH key tar header
	I0630 14:18:19.994739 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994653 1558447 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.994842 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682
	I0630 14:18:19.994876 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 14:18:19.994890 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 (perms=drwx------)
	I0630 14:18:19.994904 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:18:19.994914 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 14:18:19.994928 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 14:18:19.994937 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:18:19.994950 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:18:19.994964 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.994974 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:19.994989 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 14:18:19.994999 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:18:19.995008 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins
	I0630 14:18:19.995017 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home
	I0630 14:18:19.995028 1558425 main.go:141] libmachine: (addons-301682) DBG | skipping /home - not owner
	I0630 14:18:19.996388 1558425 main.go:141] libmachine: (addons-301682) define libvirt domain using xml: 
	I0630 14:18:19.996417 1558425 main.go:141] libmachine: (addons-301682) <domain type='kvm'>
	I0630 14:18:19.996424 1558425 main.go:141] libmachine: (addons-301682)   <name>addons-301682</name>
	I0630 14:18:19.996429 1558425 main.go:141] libmachine: (addons-301682)   <memory unit='MiB'>4096</memory>
	I0630 14:18:19.996434 1558425 main.go:141] libmachine: (addons-301682)   <vcpu>2</vcpu>
	I0630 14:18:19.996437 1558425 main.go:141] libmachine: (addons-301682)   <features>
	I0630 14:18:19.996441 1558425 main.go:141] libmachine: (addons-301682)     <acpi/>
	I0630 14:18:19.996445 1558425 main.go:141] libmachine: (addons-301682)     <apic/>
	I0630 14:18:19.996450 1558425 main.go:141] libmachine: (addons-301682)     <pae/>
	I0630 14:18:19.996454 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996496 1558425 main.go:141] libmachine: (addons-301682)   </features>
	I0630 14:18:19.996523 1558425 main.go:141] libmachine: (addons-301682)   <cpu mode='host-passthrough'>
	I0630 14:18:19.996559 1558425 main.go:141] libmachine: (addons-301682)   
	I0630 14:18:19.996579 1558425 main.go:141] libmachine: (addons-301682)   </cpu>
	I0630 14:18:19.996596 1558425 main.go:141] libmachine: (addons-301682)   <os>
	I0630 14:18:19.996607 1558425 main.go:141] libmachine: (addons-301682)     <type>hvm</type>
	I0630 14:18:19.996615 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='cdrom'/>
	I0630 14:18:19.996623 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='hd'/>
	I0630 14:18:19.996628 1558425 main.go:141] libmachine: (addons-301682)     <bootmenu enable='no'/>
	I0630 14:18:19.996634 1558425 main.go:141] libmachine: (addons-301682)   </os>
	I0630 14:18:19.996639 1558425 main.go:141] libmachine: (addons-301682)   <devices>
	I0630 14:18:19.996646 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='cdrom'>
	I0630 14:18:19.996654 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/boot2docker.iso'/>
	I0630 14:18:19.996661 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hdc' bus='scsi'/>
	I0630 14:18:19.996666 1558425 main.go:141] libmachine: (addons-301682)       <readonly/>
	I0630 14:18:19.996672 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996677 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='disk'>
	I0630 14:18:19.996687 1558425 main.go:141] libmachine: (addons-301682)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:18:19.996710 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk'/>
	I0630 14:18:19.996729 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hda' bus='virtio'/>
	I0630 14:18:19.996742 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996753 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996766 1558425 main.go:141] libmachine: (addons-301682)       <source network='mk-addons-301682'/>
	I0630 14:18:19.996777 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996786 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996796 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996808 1558425 main.go:141] libmachine: (addons-301682)       <source network='default'/>
	I0630 14:18:19.996821 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996847 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996868 1558425 main.go:141] libmachine: (addons-301682)     <serial type='pty'>
	I0630 14:18:19.996884 1558425 main.go:141] libmachine: (addons-301682)       <target port='0'/>
	I0630 14:18:19.996899 1558425 main.go:141] libmachine: (addons-301682)     </serial>
	I0630 14:18:19.996909 1558425 main.go:141] libmachine: (addons-301682)     <console type='pty'>
	I0630 14:18:19.996918 1558425 main.go:141] libmachine: (addons-301682)       <target type='serial' port='0'/>
	I0630 14:18:19.996928 1558425 main.go:141] libmachine: (addons-301682)     </console>
	I0630 14:18:19.996938 1558425 main.go:141] libmachine: (addons-301682)     <rng model='virtio'>
	I0630 14:18:19.996951 1558425 main.go:141] libmachine: (addons-301682)       <backend model='random'>/dev/random</backend>
	I0630 14:18:19.996962 1558425 main.go:141] libmachine: (addons-301682)     </rng>
	I0630 14:18:19.996969 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996980 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996990 1558425 main.go:141] libmachine: (addons-301682)   </devices>
	I0630 14:18:19.997056 1558425 main.go:141] libmachine: (addons-301682) </domain>
	I0630 14:18:19.997083 1558425 main.go:141] libmachine: (addons-301682) 
	I0630 14:18:20.002436 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:4a:da:84 in network default
	I0630 14:18:20.002966 1558425 main.go:141] libmachine: (addons-301682) starting domain...
	I0630 14:18:20.002981 1558425 main.go:141] libmachine: (addons-301682) ensuring networks are active...
	I0630 14:18:20.002988 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:20.003928 1558425 main.go:141] libmachine: (addons-301682) Ensuring network default is active
	I0630 14:18:20.004377 1558425 main.go:141] libmachine: (addons-301682) Ensuring network mk-addons-301682 is active
	I0630 14:18:20.004924 1558425 main.go:141] libmachine: (addons-301682) getting domain XML...
	I0630 14:18:20.006331 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:21.490289 1558425 main.go:141] libmachine: (addons-301682) waiting for IP...
	I0630 14:18:21.491154 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.491628 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.491677 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.491627 1558447 retry.go:31] will retry after 227.981696ms: waiting for domain to come up
	I0630 14:18:21.721263 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.721780 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.721803 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.721737 1558447 retry.go:31] will retry after 379.046975ms: waiting for domain to come up
	I0630 14:18:22.102468 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.102921 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.102946 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.102870 1558447 retry.go:31] will retry after 342.349164ms: waiting for domain to come up
	I0630 14:18:22.446573 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.446984 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.447028 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.446972 1558447 retry.go:31] will retry after 471.24813ms: waiting for domain to come up
	I0630 14:18:22.920211 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.920789 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.920882 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.920792 1558447 retry.go:31] will retry after 708.674729ms: waiting for domain to come up
	I0630 14:18:23.631552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:23.632140 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:23.632158 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:23.632083 1558447 retry.go:31] will retry after 832.667186ms: waiting for domain to come up
	I0630 14:18:24.466597 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:24.467128 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:24.467188 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:24.467084 1558447 retry.go:31] will retry after 1.046318752s: waiting for domain to come up
	I0630 14:18:25.514952 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:25.515439 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:25.515467 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:25.515417 1558447 retry.go:31] will retry after 1.194063503s: waiting for domain to come up
	I0630 14:18:26.712109 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:26.712668 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:26.712736 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:26.712627 1558447 retry.go:31] will retry after 1.248422127s: waiting for domain to come up
	I0630 14:18:27.962423 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:27.962871 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:27.962904 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:27.962823 1558447 retry.go:31] will retry after 2.035519816s: waiting for domain to come up
	I0630 14:18:29.999626 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:30.000023 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:30.000122 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:30.000029 1558447 retry.go:31] will retry after 2.163487066s: waiting for domain to come up
	I0630 14:18:32.164834 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:32.165260 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:32.165289 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:32.165193 1558447 retry.go:31] will retry after 2.715279658s: waiting for domain to come up
	I0630 14:18:34.882095 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:34.882613 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:34.882651 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:34.882566 1558447 retry.go:31] will retry after 4.101409574s: waiting for domain to come up
	I0630 14:18:38.986670 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:38.987057 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:38.987115 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:38.987021 1558447 retry.go:31] will retry after 4.770477957s: waiting for domain to come up
	I0630 14:18:43.763775 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764289 1558425 main.go:141] libmachine: (addons-301682) found domain IP: 192.168.39.227
	I0630 14:18:43.764317 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has current primary IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764323 1558425 main.go:141] libmachine: (addons-301682) reserving static IP address...
	I0630 14:18:43.764708 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find host DHCP lease matching {name: "addons-301682", mac: "52:54:00:83:16:36", ip: "192.168.39.227"} in network mk-addons-301682
	I0630 14:18:43.852639 1558425 main.go:141] libmachine: (addons-301682) reserved static IP address 192.168.39.227 for domain addons-301682
	I0630 14:18:43.852672 1558425 main.go:141] libmachine: (addons-301682) DBG | Getting to WaitForSSH function...
	I0630 14:18:43.852679 1558425 main.go:141] libmachine: (addons-301682) waiting for SSH...
	I0630 14:18:43.855466 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855863 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.855913 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855970 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH client type: external
	I0630 14:18:43.856034 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa (-rw-------)
	I0630 14:18:43.856089 1558425 main.go:141] libmachine: (addons-301682) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:18:43.856119 1558425 main.go:141] libmachine: (addons-301682) DBG | About to run SSH command:
	I0630 14:18:43.856137 1558425 main.go:141] libmachine: (addons-301682) DBG | exit 0
	I0630 14:18:43.981627 1558425 main.go:141] libmachine: (addons-301682) DBG | SSH cmd err, output: <nil>: 
	I0630 14:18:43.981928 1558425 main.go:141] libmachine: (addons-301682) KVM machine creation complete
	I0630 14:18:43.982338 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:43.982966 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983226 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983462 1558425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:18:43.983477 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:18:43.984862 1558425 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:18:43.984878 1558425 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:18:43.984885 1558425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:18:43.984892 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:43.987532 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.987932 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.987959 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.988068 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:43.988288 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988434 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988572 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:43.988711 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:43.988940 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:43.988950 1558425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:18:44.093060 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.093094 1558425 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:18:44.093103 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.096339 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096697 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.096721 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096934 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.097182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097449 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097610 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.097843 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.098060 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.098080 1558425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:18:44.202824 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:18:44.202946 1558425 main.go:141] libmachine: found compatible host: buildroot
	I0630 14:18:44.202959 1558425 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:18:44.202967 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203257 1558425 buildroot.go:166] provisioning hostname "addons-301682"
	I0630 14:18:44.203283 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.206655 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.206965 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.206989 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.207261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.207476 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207654 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207765 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.207928 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.208172 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.208189 1558425 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-301682 && echo "addons-301682" | sudo tee /etc/hostname
	I0630 14:18:44.326076 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-301682
	
	I0630 14:18:44.326120 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.329781 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330236 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.330271 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330493 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.330780 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331000 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331147 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.331319 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.331561 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.331583 1558425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-301682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-301682/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-301682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:18:44.442815 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.442853 1558425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 14:18:44.442872 1558425 buildroot.go:174] setting up certificates
	I0630 14:18:44.442886 1558425 provision.go:84] configureAuth start
	I0630 14:18:44.442963 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.443427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:44.446591 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447120 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.447146 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447411 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.449967 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450292 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.450314 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450474 1558425 provision.go:143] copyHostCerts
	I0630 14:18:44.450577 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 14:18:44.450730 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 14:18:44.450832 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 14:18:44.450922 1558425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.addons-301682 san=[127.0.0.1 192.168.39.227 addons-301682 localhost minikube]
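The server cert's usefulness hinges on its SAN list: every name or address the endpoint may be dialed by (127.0.0.1, the DHCP-assigned 192.168.39.227, addons-301682, localhost, minikube) must appear, or later TLS verification fails. A self-contained sketch of issuing such a cert with Go's crypto/x509; the in-memory CA stands in for .minikube/certs/ca.pem, and the key type and lifetimes are illustrative, not minikube's actual choices:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // throwaway CA standing in for the ca.pem/ca-key.pem pair in the log
        caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // server cert carrying the SANs the log reports
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-301682"}},
            DNSNames:     []string{"addons-301682", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }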
	I0630 14:18:44.669777 1558425 provision.go:177] copyRemoteCerts
	I0630 14:18:44.669866 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:18:44.669906 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.673124 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673495 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.673530 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673760 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.674080 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.674291 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.674517 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:44.758379 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:18:44.788885 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:18:44.817666 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:18:44.847039 1558425 provision.go:87] duration metric: took 404.122435ms to configureAuth
	I0630 14:18:44.847076 1558425 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:18:44.847582 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:18:44.847720 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.850359 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.850971 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.850998 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.851240 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.851500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851706 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851871 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.852084 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.852306 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.852322 1558425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 14:18:45.094141 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
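This drop-in matters for the Registry test that ultimately fails in this report: 10.96.0.0/12 is the cluster's service CIDR (serviceSubnet in the kubeadm config further down), so the --insecure-registry flag shown tells CRI-O to allow plain-HTTP pulls from any in-cluster registry Service, including the registry addon's ClusterIP. The file as it lands on disk is exactly what the tee echoed back:

    # /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '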
	I0630 14:18:45.094172 1558425 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:18:45.094182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetURL
	I0630 14:18:45.095525 1558425 main.go:141] libmachine: (addons-301682) DBG | using libvirt version 6000000
	I0630 14:18:45.097995 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098457 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.098484 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098973 1558425 main.go:141] libmachine: Docker is up and running!
	I0630 14:18:45.098988 1558425 main.go:141] libmachine: Reticulating splines...
	I0630 14:18:45.098996 1558425 client.go:171] duration metric: took 26.087298039s to LocalClient.Create
	I0630 14:18:45.099029 1558425 start.go:167] duration metric: took 26.087375233s to libmachine.API.Create "addons-301682"
	I0630 14:18:45.099043 1558425 start.go:293] postStartSetup for "addons-301682" (driver="kvm2")
	I0630 14:18:45.099058 1558425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:18:45.099080 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.099385 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:18:45.099417 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.103070 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103476 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.103519 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.103974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.104154 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.104348 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.190062 1558425 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:18:45.194479 1558425 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:18:45.194513 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 14:18:45.194584 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 14:18:45.194617 1558425 start.go:296] duration metric: took 95.564885ms for postStartSetup
	I0630 14:18:45.194655 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:45.195269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.198414 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.198916 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.198937 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.199225 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:45.199414 1558425 start.go:128] duration metric: took 26.209245344s to createHost
	I0630 14:18:45.199439 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.202677 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203657 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.203683 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203917 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.204167 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204389 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204594 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.204750 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:45.204952 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:45.204962 1558425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:18:45.310482 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751293125.283428942
	
	I0630 14:18:45.310513 1558425 fix.go:216] guest clock: 1751293125.283428942
	I0630 14:18:45.310540 1558425 fix.go:229] Guest: 2025-06-30 14:18:45.283428942 +0000 UTC Remote: 2025-06-30 14:18:45.199427216 +0000 UTC m=+26.326566099 (delta=84.001726ms)
	I0630 14:18:45.310570 1558425 fix.go:200] guest clock delta is within tolerance: 84.001726ms
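The fix.go lines implement a simple clock-skew guard: the guest's `date +%s.%N` output is parsed and compared against the host's wall clock, and only a delta beyond tolerance would trigger a resync. A sketch of the comparison; the 2s threshold is an assumption, since the actual tolerance isn't printed in this log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        out := "1751293125.283428942" // guest `date +%s.%N`, captured over SSH above
        secs, nsecs, _ := strings.Cut(strings.TrimSpace(out), ".")
        sec, _ := strconv.ParseInt(secs, 10, 64)
        nsec, _ := strconv.ParseInt(nsecs, 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Println("guest clock delta:", delta) // ~84ms in this run
        if delta > 2*time.Second {               // assumed tolerance
            fmt.Println("would resync the guest clock here")
        }
    }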
	I0630 14:18:45.310578 1558425 start.go:83] releasing machines lock for "addons-301682", held for 26.320495243s
	I0630 14:18:45.310656 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.310928 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.313785 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314207 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.314241 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314506 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315123 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315340 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315461 1558425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:18:45.315505 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.315646 1558425 ssh_runner.go:195] Run: cat /version.json
	I0630 14:18:45.315683 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.318925 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319155 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319563 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319594 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319617 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319643 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319788 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.319877 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.320031 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320110 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320304 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320317 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320442 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.320501 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.399981 1558425 ssh_runner.go:195] Run: systemctl --version
	I0630 14:18:45.435607 1558425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 14:18:45.595593 1558425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:18:45.602291 1558425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:18:45.602374 1558425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:18:45.622229 1558425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
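Stray bridge/podman CNI configs are renamed rather than deleted, so minikube's own bridge CNI (pod CIDR 10.244.0.0/16, per the kubeadm config below) is the only one CRI-O loads while the originals stay restorable. The find/mv pipeline above, rendered as a Go sketch:

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pat)
            if err != nil {
                log.Fatal(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
            }
        }
    }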
	I0630 14:18:45.622263 1558425 start.go:495] detecting cgroup driver to use...
	I0630 14:18:45.622333 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 14:18:45.641226 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 14:18:45.658995 1558425 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:18:45.659074 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:18:45.675308 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:18:45.691780 1558425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:18:45.844773 1558425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:18:46.002067 1558425 docker.go:246] disabling docker service ...
	I0630 14:18:46.002163 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:18:46.018486 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:18:46.032711 1558425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:18:46.215507 1558425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:18:46.345437 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:18:46.361241 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:18:46.382182 1558425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 14:18:46.382265 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.393781 1558425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 14:18:46.393858 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.404879 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.415753 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.427101 1558425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:18:46.439585 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.450640 1558425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.469657 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
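The sed chain edits CRI-O's drop-in in place rather than templating a new file. Assuming the stock section layout of CRI-O 1.29 (pause_image under [crio.image]; the cgroup and sysctl knobs under [crio.runtime]), the effective settings afterwards look roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]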
	I0630 14:18:46.480995 1558425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:18:46.490960 1558425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:18:46.491038 1558425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:18:46.506162 1558425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
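The failed sysctl is expected on a fresh guest: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, and kube-proxy's iptables rules need bridged pod traffic to traverse netfilter. The fallback is just modprobe-then-enable, sketched here (it writes /proc directly, as the logged echo does, so it needs root):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // load br_netfilter so bridged pod traffic hits the iptables hooks
        if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
            log.Fatal(err)
        }
        // enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }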
	I0630 14:18:46.516885 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:46.649290 1558425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 14:18:46.754804 1558425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 14:18:46.754924 1558425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 14:18:46.760277 1558425 start.go:563] Will wait 60s for crictl version
	I0630 14:18:46.760374 1558425 ssh_runner.go:195] Run: which crictl
	I0630 14:18:46.764622 1558425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:18:46.806540 1558425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 14:18:46.806668 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.835571 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.870294 1558425 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 14:18:46.871793 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:46.874897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875281 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:46.875316 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875568 1558425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:18:46.880040 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:46.893844 1558425 kubeadm.go:875] updating cluster {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:18:46.894040 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:46.894098 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:46.928051 1558425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:18:46.928142 1558425 ssh_runner.go:195] Run: which lz4
	I0630 14:18:46.932106 1558425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:18:46.936459 1558425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:18:46.936498 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 14:18:48.250677 1558425 crio.go:462] duration metric: took 1.318609473s to copy over tarball
	I0630 14:18:48.250794 1558425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:18:50.229636 1558425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978807649s)
	I0630 14:18:50.229688 1558425 crio.go:469] duration metric: took 1.978978941s to extract the tarball
	I0630 14:18:50.229696 1558425 ssh_runner.go:146] rm: /preloaded.tar.lz4
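The preload path skips pulling each image individually: since crictl reported no kube-apiserver image, a 421 MB lz4 tarball of the cached image store is scp'd up and unpacked over /var. The --xattrs flags are load-bearing; security.capability extended attributes (file capabilities inside image layers) would otherwise be dropped. The extract step as a Go exec sketch, arguments verbatim from the logged command:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }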
	I0630 14:18:50.268804 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:50.313787 1558425 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 14:18:50.313824 1558425 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:18:50.313836 1558425 kubeadm.go:926] updating node { 192.168.39.227 8443 v1.33.2 crio true true} ...
	I0630 14:18:50.313984 1558425 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-301682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
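One systemd subtlety in that generated unit: the bare ExecStart= line is not a mistake. For ordinary services systemd rejects a second ExecStart unless the list is cleared first, so an override that replaces the command must reset it before redefining:

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet ...   # flags as logged above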
	I0630 14:18:50.314108 1558425 ssh_runner.go:195] Run: crio config
	I0630 14:18:50.358762 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:50.358788 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:50.358799 1558425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:18:50.358821 1558425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-301682 NodeName:addons-301682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:18:50.358985 1558425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-301682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
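The generated kubeadm.yaml is four YAML documents joined by --- separators: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration; kubeadm splits on the separators and validates each kind independently. A quick way to sanity-check such a file on the guest before the real init is kubeadm's dry-run mode:

    sudo /var/lib/minikube/binaries/v1.33.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run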
	I0630 14:18:50.359075 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:18:50.370269 1558425 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:18:50.370359 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:18:50.381422 1558425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0630 14:18:50.402864 1558425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:18:50.423535 1558425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0630 14:18:50.443802 1558425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0630 14:18:50.448073 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:50.462771 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:50.610565 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:18:50.641674 1558425 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682 for IP: 192.168.39.227
	I0630 14:18:50.641703 1558425 certs.go:194] generating shared ca certs ...
	I0630 14:18:50.641726 1558425 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.641917 1558425 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 14:18:50.775973 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt ...
	I0630 14:18:50.776127 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt: {Name:mk4a7e2f23df1877aa667a5fe9d149d87fa65b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776340 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key ...
	I0630 14:18:50.776353 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key: {Name:mkfe815a12ae8eded146419f42722ed747bb8cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776428 1558425 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 14:18:51.239699 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt ...
	I0630 14:18:51.239736 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt: {Name:mk010f91985630538e2436d654ff5b4cc759ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.239913 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key ...
	I0630 14:18:51.239969 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key: {Name:mk7a36f8a28748533897dd07634d8a5fe44a63a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.240059 1558425 certs.go:256] generating profile certs ...
	I0630 14:18:51.240131 1558425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key
	I0630 14:18:51.240150 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt with IP's: []
	I0630 14:18:51.635887 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt ...
	I0630 14:18:51.635927 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: {Name:mk22a67b2c0e90bc5dc67c34e330ee73fa799ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636119 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key ...
	I0630 14:18:51.636131 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key: {Name:mkbf3398b6d7cd5371d9a47d76e04eca4caef4d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636203 1558425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213
	I0630 14:18:51.636222 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0630 14:18:52.292769 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 ...
	I0630 14:18:52.292809 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213: {Name:mk1402d3ac26fc5001a4011347c3552a378bda20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.292987 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 ...
	I0630 14:18:52.293001 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213: {Name:mkeaa6e21db5ae6cfb6b65c2ca90535340da5144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.293104 1558425 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt
	I0630 14:18:52.293196 1558425 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key
	I0630 14:18:52.293250 1558425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key
	I0630 14:18:52.293270 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt with IP's: []
	I0630 14:18:52.419123 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt ...
	I0630 14:18:52.419160 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt: {Name:mk3dd33047a5c3911a43a99bfac807aefa8e06f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419432 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key ...
	I0630 14:18:52.419460 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key: {Name:mk0d0b95d0dc825fc1e604461553530ed22a222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419680 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:18:52.419719 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:18:52.419744 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:18:52.419768 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 14:18:52.420585 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:18:52.463313 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:18:52.499004 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:18:52.526030 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 14:18:52.553220 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:18:52.581783 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:18:52.609656 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:18:52.639333 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 14:18:52.668789 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:18:52.696673 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:18:52.718151 1558425 ssh_runner.go:195] Run: openssl version
	I0630 14:18:52.724602 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:18:52.737181 1558425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742169 1558425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742231 1558425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.749342 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
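The b5213941.0 link follows OpenSSL's subject-hash lookup convention: TLS libraries find a CA in /etc/ssl/certs by hashing its subject name, so the link's basename must equal the cert's subject hash (the .0 suffix disambiguates collisions). That is why the `openssl x509 -hash -noout` run above matters; on the guest:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941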
	I0630 14:18:52.762744 1558425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:18:52.768406 1558425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:18:52.768474 1558425 kubeadm.go:392] StartCluster: {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:52.768572 1558425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 14:18:52.768641 1558425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:18:52.812315 1558425 cri.go:89] found id: ""
	I0630 14:18:52.812437 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:18:52.824357 1558425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:18:52.837485 1558425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:18:52.850688 1558425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:18:52.850718 1558425 kubeadm.go:157] found existing configuration files:
	
	I0630 14:18:52.850770 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:18:52.862272 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:18:52.862353 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:18:52.874603 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:18:52.885384 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:18:52.885470 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:18:52.897341 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.908726 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:18:52.908791 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.920093 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:18:52.930423 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:18:52.930535 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:18:52.943582 1558425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:18:53.101493 1558425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:19:04.329808 1558425 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:19:04.329898 1558425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:19:04.330028 1558425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:19:04.330246 1558425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:19:04.330383 1558425 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:19:04.330478 1558425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:19:04.332630 1558425 out.go:235]   - Generating certificates and keys ...
	I0630 14:19:04.332731 1558425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:19:04.332810 1558425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:19:04.332905 1558425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:19:04.332972 1558425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:19:04.333024 1558425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:19:04.333069 1558425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:19:04.333119 1558425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:19:04.333250 1558425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333332 1558425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:19:04.333509 1558425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333623 1558425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:19:04.333739 1558425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:19:04.333816 1558425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:19:04.333868 1558425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:19:04.333909 1558425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:19:04.333955 1558425 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:19:04.334001 1558425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:19:04.334088 1558425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:19:04.334155 1558425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:19:04.334337 1558425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:19:04.334433 1558425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:19:04.336040 1558425 out.go:235]   - Booting up control plane ...
	I0630 14:19:04.336158 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:19:04.336225 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:19:04.336291 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:19:04.336387 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:19:04.336461 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:19:04.336498 1558425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:19:04.336705 1558425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:19:04.336826 1558425 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:19:04.336898 1558425 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501258s
	I0630 14:19:04.336999 1558425 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:19:04.337079 1558425 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.227:8443/livez
	I0630 14:19:04.337160 1558425 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:19:04.337266 1558425 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:19:04.337343 1558425 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.200262885s
	I0630 14:19:04.337437 1558425 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.075387862s
	I0630 14:19:04.337541 1558425 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001441935s
	I0630 14:19:04.337665 1558425 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:19:04.337791 1558425 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:19:04.337843 1558425 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:19:04.338003 1558425 kubeadm.go:310] [mark-control-plane] Marking the node addons-301682 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:19:04.338066 1558425 kubeadm.go:310] [bootstrap-token] Using token: anrlv2.kitz2ouxhot5qn5d
	I0630 14:19:04.339966 1558425 out.go:235]   - Configuring RBAC rules ...
	I0630 14:19:04.340101 1558425 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:19:04.340226 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:19:04.340408 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:19:04.340552 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:19:04.340686 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:19:04.340806 1558425 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:19:04.340905 1558425 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:19:04.340944 1558425 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:19:04.340984 1558425 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:19:04.340990 1558425 kubeadm.go:310] 
	I0630 14:19:04.341040 1558425 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:19:04.341045 1558425 kubeadm.go:310] 
	I0630 14:19:04.341135 1558425 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:19:04.341142 1558425 kubeadm.go:310] 
	I0630 14:19:04.341172 1558425 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:19:04.341223 1558425 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:19:04.341270 1558425 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:19:04.341276 1558425 kubeadm.go:310] 
	I0630 14:19:04.341322 1558425 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:19:04.341328 1558425 kubeadm.go:310] 
	I0630 14:19:04.341449 1558425 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:19:04.341467 1558425 kubeadm.go:310] 
	I0630 14:19:04.341541 1558425 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:19:04.341643 1558425 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:19:04.341707 1558425 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:19:04.341712 1558425 kubeadm.go:310] 
	I0630 14:19:04.341781 1558425 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:19:04.341846 1558425 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:19:04.341851 1558425 kubeadm.go:310] 
	I0630 14:19:04.341924 1558425 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342019 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 14:19:04.342038 1558425 kubeadm.go:310] 	--control-plane 
	I0630 14:19:04.342043 1558425 kubeadm.go:310] 
	I0630 14:19:04.342140 1558425 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:19:04.342157 1558425 kubeadm.go:310] 
	I0630 14:19:04.342225 1558425 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342331 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
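The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info; joining nodes use it to pin the CA they discover over the bootstrap token. A sketch of how such a hash can be derived in Go, assuming the conventional kubeadm CA path:

```go
// Sketch: deriving a kubeadm-style discovery-token-ca-cert-hash.
// The CA path below is the conventional kubeadm location, assumed here.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```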
	I0630 14:19:04.342344 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:19:04.342353 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:19:04.344305 1558425 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:19:04.345962 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:19:04.358944 1558425 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
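The 496-byte conflist payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a minimal, illustrative bridge conflist of the general shape the bridge CNI plugin accepts (names and the pod subnet here are placeholders, not the exact payload minikube wrote):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```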
	I0630 14:19:04.382550 1558425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:19:04.382682 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:04.382684 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-301682 minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-301682 minikube.k8s.io/primary=true
	I0630 14:19:04.443025 1558425 ops.go:34] apiserver oom_adj: -16
	I0630 14:19:04.557859 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.058710 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.558655 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.058095 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.558920 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.058903 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.558782 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.058045 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.134095 1558425 kubeadm.go:1105] duration metric: took 3.751500145s to wait for elevateKubeSystemPrivileges
	I0630 14:19:08.134146 1558425 kubeadm.go:394] duration metric: took 15.365674649s to StartCluster
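The repeated `kubectl get sa default` invocations above are a readiness poll: the cluster-admin binding for kube-system cannot take effect until the default service account exists, so minikube retries on a roughly half-second cadence until the get succeeds (about 3.75s here). A hedged sketch of such a poll loop, with the kubectl path and kubeconfig as illustrative assumptions:

```go
// Sketch of a poll-until-ready loop like the one in the log: retry
// `kubectl get sa default` until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	fmt.Println("timed out waiting for default service account")
}
```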
	I0630 14:19:08.134169 1558425 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.134310 1558425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:19:08.134819 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.135078 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:19:08.135086 1558425 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:19:08.135172 1558425 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0630 14:19:08.135355 1558425 addons.go:69] Setting yakd=true in profile "addons-301682"
	I0630 14:19:08.135370 1558425 addons.go:69] Setting default-storageclass=true in profile "addons-301682"
	I0630 14:19:08.135401 1558425 addons.go:69] Setting ingress=true in profile "addons-301682"
	I0630 14:19:08.135408 1558425 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-301682"
	I0630 14:19:08.135419 1558425 addons.go:69] Setting ingress-dns=true in profile "addons-301682"
	I0630 14:19:08.135425 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-301682"
	I0630 14:19:08.135433 1558425 addons.go:238] Setting addon ingress-dns=true in "addons-301682"
	I0630 14:19:08.135450 1558425 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135439 1558425 addons.go:69] Setting cloud-spanner=true in profile "addons-301682"
	I0630 14:19:08.135466 1558425 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-301682"
	I0630 14:19:08.135453 1558425 addons.go:69] Setting registry-creds=true in profile "addons-301682"
	I0630 14:19:08.135470 1558425 addons.go:238] Setting addon cloud-spanner=true in "addons-301682"
	I0630 14:19:08.135482 1558425 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-301682"
	I0630 14:19:08.135488 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135499 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-301682"
	I0630 14:19:08.135507 1558425 addons.go:238] Setting addon registry-creds=true in "addons-301682"
	I0630 14:19:08.135508 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135522 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135532 1558425 addons.go:69] Setting volcano=true in profile "addons-301682"
	I0630 14:19:08.135553 1558425 addons.go:238] Setting addon volcano=true in "addons-301682"
	I0630 14:19:08.135560 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135601 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135968 1558425 addons.go:69] Setting storage-provisioner=true in profile "addons-301682"
	I0630 14:19:08.135968 1558425 addons.go:69] Setting volumesnapshots=true in profile "addons-301682"
	I0630 14:19:08.135383 1558425 addons.go:238] Setting addon yakd=true in "addons-301682"
	I0630 14:19:08.135985 1558425 addons.go:238] Setting addon storage-provisioner=true in "addons-301682"
	I0630 14:19:08.135986 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135992 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135999 1558425 addons.go:69] Setting metrics-server=true in profile "addons-301682"
	I0630 14:19:08.136001 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135468 1558425 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:08.136013 1558425 addons.go:238] Setting addon metrics-server=true in "addons-301682"
	I0630 14:19:08.136018 1558425 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136026 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136033 1558425 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-301682"
	I0630 14:19:08.136033 1558425 addons.go:69] Setting registry=true in profile "addons-301682"
	I0630 14:19:08.136037 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136042 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136046 1558425 addons.go:238] Setting addon registry=true in "addons-301682"
	I0630 14:19:08.136053 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136053 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136063 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136333 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136344 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135988 1558425 addons.go:238] Setting addon volumesnapshots=true in "addons-301682"
	I0630 14:19:08.136373 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136380 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135392 1558425 addons.go:69] Setting gcp-auth=true in profile "addons-301682"
	I0630 14:19:08.136406 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135413 1558425 addons.go:238] Setting addon ingress=true in "addons-301682"
	I0630 14:19:08.136410 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136430 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136437 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136439 1558425 mustload.go:65] Loading cluster: addons-301682
	I0630 14:19:08.135985 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136376 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136021 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136019 1558425 addons.go:69] Setting inspektor-gadget=true in profile "addons-301682"
	I0630 14:19:08.136533 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136408 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136571 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136399 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136594 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136043 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136654 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136035 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135386 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136802 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136830 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136538 1558425 addons.go:238] Setting addon inspektor-gadget=true in "addons-301682"
	I0630 14:19:08.136860 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136968 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.137006 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.141678 1558425 out.go:177] * Verifying Kubernetes components...
	I0630 14:19:08.143558 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:19:08.149915 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.149982 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.150069 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.150111 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.153357 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.153432 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.165614 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0630 14:19:08.165858 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0630 14:19:08.166745 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.166906 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.167573 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167595 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.167730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167744 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.168231 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168297 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.168851 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.168901 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.173235 1558425 addons.go:238] Setting addon default-storageclass=true in "addons-301682"
	I0630 14:19:08.173294 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.173724 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.173785 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.184456 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0630 14:19:08.185663 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.186359 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.186383 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.186868 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.187481 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.187524 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.198676 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0630 14:19:08.199720 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0630 14:19:08.200624 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.201056 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0630 14:19:08.201384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.201425 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.201824 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.202320 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.202341 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.202767 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.203373 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.203425 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.203875 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.204017 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.204559 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.204608 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.204944 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.204958 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.205500 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.206106 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.206167 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.212484 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0630 14:19:08.213076 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.213762 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.213782 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.214717 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0630 14:19:08.214882 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0630 14:19:08.215450 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.215549 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.216208 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216234 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216395 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216419 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216498 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.216551 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0630 14:19:08.217141 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.217198 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.218026 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218644 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218679 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:19:08.218965 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219098 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0630 14:19:08.219374 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.219416 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.219490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.219517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.219600 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219645 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.220038 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220058 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.220197 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220208 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.222722 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0630 14:19:08.222897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0630 14:19:08.223028 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.223845 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.223892 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.224072 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0630 14:19:08.224388 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0630 14:19:08.224623 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.225142 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.225164 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.225248 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0630 14:19:08.225593 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226043 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226641 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.226692 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.227826 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.228314 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.228351 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.228730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.228753 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.228834 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.228874 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0630 14:19:08.229220 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.229470 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.229681 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.229725 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.230097 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.230128 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.240167 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.240974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.241058 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0630 14:19:08.243477 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.243596 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0630 14:19:08.261647 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0630 14:19:08.261668 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0630 14:19:08.261862 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0630 14:19:08.262201 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0630 14:19:08.261652 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0630 14:19:08.261852 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0630 14:19:08.262971 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.263041 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263580 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263640 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263642 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263689 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263697 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263766 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263767 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264204 1558425 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:19:08.264700 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264710 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264910 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.264924 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265056 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265067 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265244 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265261 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265313 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265330 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265397 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265504 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265522 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265580 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265661 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265661 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:08.265674 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265689 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:19:08.265696 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265706 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265712 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.265940 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265988 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266721 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266732 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266787 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266802 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266873 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266885 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266892 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266920 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266927 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266935 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266948 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266963 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267095 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267169 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267219 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267412 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267464 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267868 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.267912 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.268375 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.268443 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.268484 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.269549 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.269597 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.270926 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.272833 1558425 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:19:08.274128 1558425 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.274146 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:19:08.274171 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.274859 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275064 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275721 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.276192 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275698 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.277235 1558425 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:19:08.277261 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:19:08.277735 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.277888 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:19:08.277911 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:19:08.278583 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.278754 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.278813 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.278881 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:19:08.278897 1558425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:19:08.278922 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279033 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.279041 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:19:08.279054 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279564 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.279577 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:19:08.279593 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279642 1558425 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:19:08.281429 1558425 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:08.281448 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:19:08.281468 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.281533 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.282713 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.283764 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284087 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284228 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:08.284248 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:19:08.284269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.284461 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284503 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284726 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.284883 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.284950 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284965 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285137 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.285324 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.285515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.285599 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285736 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286034 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.286041 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286069 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286207 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.286615 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.286628 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286660 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286673 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.287215 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287232 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.287998 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287988 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288619 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288647 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.288829 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.288982 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289082 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.289115 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289387 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289495 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289954 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.289983 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.290152 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290230 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290347 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290431 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.291154 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.292418 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.292454 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.292433 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.292721 1558425 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-301682"
	I0630 14:19:08.292738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.292763 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.292887 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.293016 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.293150 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.293200 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.294549 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:19:08.296018 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:19:08.297203 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:19:08.298509 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:19:08.299741 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:19:08.301072 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:19:08.302287 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:19:08.303246 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0630 14:19:08.303926 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.304284 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:19:08.304575 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.304600 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.305069 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.305303 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.305513 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:19:08.305597 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:19:08.305646 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0630 14:19:08.308495 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0630 14:19:08.309009 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309265 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309301 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309500 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.309544 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309729 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.309915 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.310105 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.310445 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.310557 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.310962 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.310986 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312430 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.312542 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0630 14:19:08.312690 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.312715 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0630 14:19:08.312896 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.312908 1558425 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:08.312914 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312922 1558425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:19:08.312899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.312950 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.312967 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0630 14:19:08.313116 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.313130 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.313608 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.313798 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.314003 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314075 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.314701 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314761 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.314826 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.315163 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315447 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.315638 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315743 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.315801 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.316217 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.316239 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.316441 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.317458 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.317755 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.318404 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.318763 1558425 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:19:08.319446 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.319608 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.319686 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.319964 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.319978 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320265 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.320279 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:08.320350 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.320357 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320810 1558425 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:19:08.320976 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:19:08.321001 1558425 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:19:08.321024 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.321215 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:19:08.322277 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:19:08.322294 1558425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:19:08.322314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323097 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323112 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.323135 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:08.323167 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.323175 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:08.323273 1558425 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
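The 'volcano' failure above is expected on this runner: the volcano addon does not support the crio runtime, so minikube reports the callback error and continues. If the warning is unwanted, the addon can be switched off explicitly for this profile (a minimal example, assuming the addons-301682 profile from this run):

	minikube -p addons-301682 addons disable volcano
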
	I0630 14:19:08.323158 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.323505 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323867 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:19:08.323883 1558425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:19:08.323899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323920 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.323964 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0630 14:19:08.324118 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.324491 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.324603 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:19:08.324644 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.324757 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.325272 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.325293 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.327148 1558425 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:19:08.328448 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328463 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.328471 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0630 14:19:08.328485 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.328486 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:19:08.328506 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:19:08.328469 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.328555 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.329271 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329296 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329298 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329306 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.329427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329488 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329522 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329831 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329844 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329873 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329893 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.329908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329932 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.329965 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.330048 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330100 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330127 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.330233 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330571 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.330635 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330797 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.331366 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.331539 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.333151 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.333196 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.333924 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.333946 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.334093 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.334267 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.334413 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.334534 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.335093 1558425 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:19:08.336351 1558425 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:08.336368 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:19:08.336384 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.339580 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340100 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.340140 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.340523 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.340672 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.340813 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.350360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0630 14:19:08.350984 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.351790 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.351819 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.352186 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.352420 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.354260 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.356054 1558425 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:19:08.357435 1558425 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:19:08.358781 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:08.358803 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:19:08.358828 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.362552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.362966 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.362990 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.363100 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.363314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.363506 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.363630 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
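Each "new ssh client" line above carries everything needed to reach the node by hand; the struct fields map directly onto a plain ssh invocation (a sketch, reusing the key path, user, and address reported in the log):

	ssh -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa docker@192.168.39.227
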
	I0630 14:19:08.439689 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
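The pipeline above injects a hosts record for host.minikube.internal into the CoreDNS Corefile ahead of the forward directive (and a log directive after errors). Run from a workstation instead of through ssh_runner, a roughly equivalent form is (assuming kubectl already points at this cluster):

	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | kubectl replace -f -
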
	I0630 14:19:08.476644 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:19:08.843915 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.877498 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.886078 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:19:08.886117 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:19:08.911521 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.934599 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:09.020016 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:09.040482 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:19:09.040511 1558425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:19:09.043569 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:09.148704 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:09.202814 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:19:09.202869 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:19:09.278194 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:19:09.278231 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:19:09.295189 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:09.295224 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:19:09.299217 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:19:09.299263 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:19:09.332360 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:19:09.332403 1558425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:19:09.352402 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:19:09.352438 1558425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:19:09.405398 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:09.451227 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:09.755506 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:19:09.755546 1558425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:19:09.891227 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:19:09.891271 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:19:09.920129 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:09.920177 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:19:09.934092 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:19:09.934135 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:19:09.987104 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:09.987162 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:19:10.065936 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:10.412611 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:19:10.412651 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:19:10.472848 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:10.472884 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:19:10.534908 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
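This apply installs the registry workload, Service, and proxy manifests into kube-system. Whether the pods actually come up can be watched with the same label selector the harness waits on later in this log (assuming kubectl access to the cluster):

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
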
	I0630 14:19:10.637801 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:19:10.637839 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:19:10.658361 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:10.787257 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:19:10.787289 1558425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:19:10.989751 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:11.047653 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:19:11.047693 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:19:11.196682 1558425 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.196715 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:19:11.291758 1558425 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.852019855s)
	I0630 14:19:11.291806 1558425 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:19:11.291816 1558425 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.815128335s)
	I0630 14:19:11.292560 1558425 node_ready.go:35] waiting up to 6m0s for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314454 1558425 node_ready.go:49] node "addons-301682" is "Ready"
	I0630 14:19:11.314498 1558425 node_ready.go:38] duration metric: took 21.89293ms for node "addons-301682" to be "Ready" ...
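The Ready check that completed in 21.89ms here has a direct kubectl equivalent, useful when reproducing the wait outside the test harness (a sketch, assuming the same node name and the test's 6m budget):

	kubectl wait --for=condition=Ready node/addons-301682 --timeout=6m
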
	I0630 14:19:11.314515 1558425 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:19:11.314579 1558425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:19:11.614705 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:19:11.614735 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:19:11.736486 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:19:11.736514 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:19:11.778191 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.869515 1558425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-301682" context rescaled to 1 replicas
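The rescale the kapi.go line records, trimming coredns to a single replica, can be reproduced by hand (roughly equivalent, assuming kubectl access):

	kubectl -n kube-system scale deployment coredns --replicas=1
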
	I0630 14:19:12.215816 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:19:12.215858 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:19:12.875440 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:19:12.875469 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:19:13.113763 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:19:13.113791 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:19:13.233897 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.233936 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:19:13.547481 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
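The bundle above installs the CSI hostpath driver's RBAC, DriverInfo, plugin, attacher, resizer, and storage class in one apply. The log does not show which pod labels the driver uses, so a name-based grep is a safer quick check than guessing a selector (a rough check, not the one the tests use):

	kubectl -n kube-system get pods | grep csi-hostpath
	kubectl get csidrivers
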
	I0630 14:19:13.908710 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.064741353s)
	I0630 14:19:13.908777 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.031226379s)
	I0630 14:19:13.908828 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908848 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908846 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.997298204s)
	I0630 14:19:13.908863 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908877 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908789 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908930 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908964 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.974334377s)
	I0630 14:19:13.908996 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909007 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909009 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.888949022s)
	I0630 14:19:13.909048 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909061 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909699 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.909716 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.909725 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909733 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910126 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910140 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910150 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910156 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910411 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910438 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910445 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910452 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910457 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910696 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910727 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910744 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910751 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910757 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.911970 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912059 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912080 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912106 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.912127 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.912244 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912321 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912376 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912399 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912409 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912423 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912436 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912476 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912487 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913952 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:15.489658 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:19:15.489718 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:15.493165 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493587 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:15.493623 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493976 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:15.494223 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:15.494515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:15.494707 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:15.765543 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:19:15.978232 1558425 addons.go:238] Setting addon gcp-auth=true in "addons-301682"
	I0630 14:19:15.978326 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:15.978844 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:15.978897 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:15.997982 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0630 14:19:15.998461 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:15.999138 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:15.999166 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:15.999618 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.000381 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:16.000428 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:16.018425 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0630 14:19:16.018996 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:16.019552 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:16.019578 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:16.020118 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.020378 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:16.022570 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:16.022848 1558425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:19:16.022880 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:16.026200 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027053 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:16.027107 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027360 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:16.027605 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:16.027797 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:16.027986 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
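At this point the gcp-auth inputs have been copied onto the node, and the runner reads the credentials file back with cat over ssh. The same verification works from the host through minikube's ssh wrapper (assuming the addons-301682 profile):

	minikube -p addons-301682 ssh -- sudo cat /var/lib/minikube/google_cloud_project
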
	I0630 14:19:16.771513 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.727888765s)
	I0630 14:19:16.771570 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.622822849s)
	I0630 14:19:16.771591 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771607 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771630 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771647 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771647 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.36619116s)
	I0630 14:19:16.771673 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771688 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771767 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.320503654s)
	I0630 14:19:16.771831 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771842 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.705862816s)
	I0630 14:19:16.771865 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771873 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771904 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.236967233s)
	I0630 14:19:16.771940 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771966 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771989 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.113597897s)
	I0630 14:19:16.772016 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772026 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772112 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.782331879s)
	I0630 14:19:16.772132 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772140 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772199 1558425 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.457605469s)
	I0630 14:19:16.772216 1558425 api_server.go:72] duration metric: took 8.637102064s to wait for apiserver process to appear ...
	I0630 14:19:16.772223 1558425 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:19:16.772245 1558425 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0630 14:19:16.771847 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772472 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772489 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772500 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772508 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772567 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772660 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772670 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772678 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772685 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772744 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772768 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772774 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772782 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772789 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773055 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773073 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773096 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773119 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773125 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773131 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773137 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773371 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773380 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773388 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773398 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773540 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773583 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773592 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773602 1558425 addons.go:479] Verifying addon registry=true in "addons-301682"
	I0630 14:19:16.773651 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773661 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773668 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773675 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773927 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773965 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774128 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774333 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774357 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774383 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774389 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774656 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774694 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774695 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774703 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774710 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774722 1558425 addons.go:479] Verifying addon ingress=true in "addons-301682"
	I0630 14:19:16.774767 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774700 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774931 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.774943 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.774797 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775055 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.775066 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.775086 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.775936 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775954 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776331 1558425 out.go:177] * Verifying ingress addon...
	I0630 14:19:16.776373 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776407 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776413 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776457 1558425 out.go:177] * Verifying registry addon...
	I0630 14:19:16.776565 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776586 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776591 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776599 1558425 addons.go:479] Verifying addon metrics-server=true in "addons-301682"
	I0630 14:19:16.776668 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776681 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.778466 1558425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:19:16.779098 1558425 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-301682 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:19:16.779694 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:19:16.788556 1558425 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0630 14:19:16.789906 1558425 api_server.go:141] control plane version: v1.33.2
	I0630 14:19:16.789941 1558425 api_server.go:131] duration metric: took 17.709666ms to wait for apiserver health ...
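The healthz probe above is plain HTTPS against the apiserver; under the default RBAC bindings /healthz is readable anonymously, so the same 200/ok can usually be fetched directly (a sketch, skipping certificate verification the way a quick manual check would):

	curl -ks https://192.168.39.227:8443/healthz
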
	I0630 14:19:16.789955 1558425 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:19:16.796628 1558425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:19:16.796662 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:16.796921 1558425 system_pods.go:59] 15 kube-system pods found
	I0630 14:19:16.796954 1558425 system_pods.go:61] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.796961 1558425 system_pods.go:61] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.796972 1558425 system_pods.go:61] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.796976 1558425 system_pods.go:61] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.796984 1558425 system_pods.go:61] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.796987 1558425 system_pods.go:61] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.796992 1558425 system_pods.go:61] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.796997 1558425 system_pods.go:61] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.797004 1558425 system_pods.go:61] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.797011 1558425 system_pods.go:61] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.797018 1558425 system_pods.go:61] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.797028 1558425 system_pods.go:61] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.797035 1558425 system_pods.go:61] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.797042 1558425 system_pods.go:61] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.797049 1558425 system_pods.go:61] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.797057 1558425 system_pods.go:74] duration metric: took 7.094316ms to wait for pod list to return data ...
	I0630 14:19:16.797068 1558425 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:19:16.798790 1558425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:19:16.798807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:16.809885 1558425 default_sa.go:45] found service account: "default"
	I0630 14:19:16.809914 1558425 default_sa.go:55] duration metric: took 12.83884ms for default service account to be created ...
	I0630 14:19:16.809925 1558425 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:19:16.818226 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.818251 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.818525 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.818587 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:16.818715 1558425 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
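Editor's note: the 'default-storageclass' warning above is an optimistic-concurrency conflict: the addon read the local-path StorageClass, something else updated it first, and the write was rejected because the cached resourceVersion was stale. The usual client-go remedy is to re-fetch and retry the whole read-modify-write under retry.RetryOnConflict. A minimal sketch follows; the helper name markNonDefault is illustrative, not minikube's code, and a pre-built clientset is assumed.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass.
// retry.RetryOnConflict re-runs the closure with freshly fetched state
// whenever the update is rejected with "the object has been modified",
// which is exactly the failure mode in the warning above.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}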
	I0630 14:19:16.836146 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.836179 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.836489 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.836539 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.898260 1558425 system_pods.go:86] 15 kube-system pods found
	I0630 14:19:16.898321 1558425 system_pods.go:89] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.898334 1558425 system_pods.go:89] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.898347 1558425 system_pods.go:89] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.898355 1558425 system_pods.go:89] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.898364 1558425 system_pods.go:89] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.898371 1558425 system_pods.go:89] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.898380 1558425 system_pods.go:89] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.898390 1558425 system_pods.go:89] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.898398 1558425 system_pods.go:89] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.898406 1558425 system_pods.go:89] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.898431 1558425 system_pods.go:89] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.898443 1558425 system_pods.go:89] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.898451 1558425 system_pods.go:89] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.898461 1558425 system_pods.go:89] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.898471 1558425 system_pods.go:89] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.898485 1558425 system_pods.go:126] duration metric: took 88.551205ms to wait for k8s-apps to be running ...
	I0630 14:19:16.898500 1558425 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:19:16.898565 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
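Editor's note: the kubelet check above leans on systemd's exit status rather than parsed output; with --quiet, systemctl is-active prints nothing and exits non-zero unless the unit is active. A minimal local sketch of the same probe (arguments copied from the log line; running it requires sudo, as there):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirror of the ssh_runner command above: only cmd.Run()'s error
	// needs checking, since --quiet suppresses all output.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubelet is not active: %v", err)
	}
	log.Println("kubelet is active")
}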
	I0630 14:19:17.317126 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:17.374411 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.596164186s)
	W0630 14:19:17.374478 1558425 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:19:17.374547 1558425 retry.go:31] will retry after 162.408109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
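Editor's note: the apply above failed because the VolumeSnapshot CRDs and a VolumeSnapshotClass object were submitted in one invocation, so the custom resource was validated before its CRD was established; the log shows minikube recovering by retrying (and later switching to apply --force). The conventional fix is to apply CRDs first and block until each reports the Established condition before applying resources of that kind. A sketch of such a wait, assuming an apiextensions clientset; the helper name is illustrative:

package main

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRDEstablished blocks until the named CRD reports Established,
// after which custom resources of that kind (here, VolumeSnapshotClass)
// can be applied without the "no matches for kind" error above.
func waitForCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not visible yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}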
	I0630 14:19:17.425522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.537869 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:17.785630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.785674 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.306660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.306889 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.552015 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.004467325s)
	I0630 14:19:18.552194 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552225 1558425 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529350239s)
	I0630 14:19:18.552276 1558425 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.653693225s)
	I0630 14:19:18.552302 1558425 system_svc.go:56] duration metric: took 1.653798008s WaitForService to wait for kubelet
	I0630 14:19:18.552318 1558425 kubeadm.go:578] duration metric: took 10.417201876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:19:18.552241 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552348 1558425 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:19:18.552645 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552664 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552675 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552686 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552919 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552936 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552948 1558425 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:18.554300 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:18.555232 1558425 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:19:18.556214 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:19:18.556827 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:19:18.557433 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:19:18.557459 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:19:18.596354 1558425 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:19:18.596393 1558425 node_conditions.go:123] node cpu capacity is 2
	I0630 14:19:18.596408 1558425 node_conditions.go:105] duration metric: took 44.050461ms to run NodePressure ...
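Editor's note: "verifying NodePressure condition" reads the NodeStatus that also carries the ephemeral-storage and cpu capacity figures logged above, and checks that no pressure condition is set. A minimal equivalent with client-go (the helper name is illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNodePressure fails if any node reports memory, disk, or PID
// pressure; a schedulable node should have all three conditions False.
func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s under pressure: %s", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}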
	I0630 14:19:18.596422 1558425 start.go:241] waiting for startup goroutines ...
	I0630 14:19:18.603104 1558425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:19:18.603135 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:18.637868 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:19:18.637900 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:19:18.748099 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:18.748163 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:19:18.792604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.792626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.843691 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:19.062533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.282741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.282766 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:19.563538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.721889 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.183953285s)
	I0630 14:19:19.721971 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.721990 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.722705 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:19.722805 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.722841 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.722861 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.722870 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.723362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.723392 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.784854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.785087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.084451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.338994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.339229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.491192 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.647431709s)
	I0630 14:19:20.491275 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491294 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491664 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.491685 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.491696 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491704 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491987 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:20.492026 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.492052 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.493344 1558425 addons.go:479] Verifying addon gcp-auth=true in "addons-301682"
	I0630 14:19:20.495394 1558425 out.go:177] * Verifying gcp-auth addon...
	I0630 14:19:20.497751 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:19:20.544088 1558425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:19:20.544122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
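Editor's note: with the gcp-auth waiter started, four kapi.go:96 poll loops (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) now interleave in the log below; each "current state: Pending" line is one failed iteration of a list-and-check over a label selector. A condensed sketch of such a loop, with an illustrative helper name and the timeout set to match the test's 6m budget:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsReady polls a label selector until every matching pod reports
// the Ready condition; each failed iteration corresponds to one
// "waiting for pod ..." line in the log below.
func waitPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
						break
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
}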
	I0630 14:19:20.616283 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.790338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.794229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.001876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.103156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.286215 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:21.287404 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.501971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.603568 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.782426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.783543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.002607 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.061769 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.283406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.283458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.501544 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.563768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.782065 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.785105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.001506 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.062272 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.283151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.283566 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.501628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.782561 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.783298 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.001778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.062179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.351397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.351533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:24.502302 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.560819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.783532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.783606 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.000665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.066861 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.283070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:25.283328 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.501446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.566260 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.782894 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.783547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.005011 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.064792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.283606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.502271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.561300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.782991 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.783050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.001311 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.061332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.282733 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:27.284226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.501814 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.562410 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.783241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.783497 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.002164 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.060264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.282980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:28.283180 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.500523 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.560485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.783545 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.000985 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.061185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.282663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.282792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.500648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.560782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.782042 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.783619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.001946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.060881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.282133 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:30.283049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.500975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.560862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.782609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.782603 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.001534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.060703 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.282157 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.283847 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:31.500628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.560669 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.782294 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.782820 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.001862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.061034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.281959 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:32.282969 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.501719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.561075 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.783855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.783890 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.001382 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.060618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:33.289955 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.501909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.560848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.782531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.784168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.003605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.060279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.282397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:34.282808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.613798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.614652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.782800 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.000818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.060998 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.282231 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.283653 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:35.509348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.560724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.781570 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.783017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.001083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.060369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.702785 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.703123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.703555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.706970 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:36.804241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.804456 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.001688 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.061214 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.282908 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.284915 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:37.500826 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.560092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.782407 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.784106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.001428 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.061107 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:38.282046 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:38.283180 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.501297 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.563927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.189422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.189531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.190495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.191248 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.282505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.282920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.500781 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.560685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.781821 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.001299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.071624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.283182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.283221 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:40.501026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.560313 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.783565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.783591 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.002088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.079056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.283365 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.283894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:41.501095 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.565670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.781792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.782774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.000619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.060899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:42.283068 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.501445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.560361 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.783776 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.783964 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.001605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.060231 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.284417 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:43.284499 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.501005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.560455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.782135 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.783795 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.001747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.061008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:44.281520 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:44.282610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.501859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.561166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.190446 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.291473 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291489 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.291572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.293575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.501432 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.560935 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.782091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.783835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.001576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.060855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.281632 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.282695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.500503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.560648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.781708 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.783401 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.001349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.060664 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.288991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.289151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.501378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.560670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.783679 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.783934 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.000774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.063640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.283018 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.288264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.501060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.782532 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.783014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.001586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.060136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.284470 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.284616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.501493 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.560740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.782176 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.783205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.001724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.061175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.285556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.285655 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.501435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.561083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.782238 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.783288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.001421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.060971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.312768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.312922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.501057 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.560396 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.782795 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.783117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.001134 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.060267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.283193 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.283291 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.502021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.560380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.783076 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.784387 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 456 near-identical poll lines from 14:19:53.0 to 14:20:49.8 omitted: kapi.go:96 kept polling the same four label selectors (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) every ~250ms, and every pod remained "Pending: [<nil>]" throughout ...]
	I0630 14:20:50.001040 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.060499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.282867 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.283070 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:50.501307 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.782746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.782790 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.000827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.061599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.281741 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.282303 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:51.501882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.561159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.782745 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.784064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.001127 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.060734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.281924 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.282442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.501618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.560955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.782622 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.001976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.060014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.283833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.283868 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.501946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.787788 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.788281 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.001841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.282587 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.282894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.501076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.560738 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.783982 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.784379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.001546 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.061794 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.282534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:55.283165 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.501579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.560818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.782537 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.001725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.060844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.282248 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.283345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:56.501508 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.560858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.781927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.783218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.001706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.061118 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.283582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.283762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:57.501038 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.560439 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.783590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.783720 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.001746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.061827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.282480 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.282960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:58.501434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.561028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.781998 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.782879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.001764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.061200 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.282609 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:59.282747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.501377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.560960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.785243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.785330 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.001691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.061010 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.282764 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.283580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.501865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.561741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.784015 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.784091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.060981 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.282859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:01.283036 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.501809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.561922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.782501 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.783709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.002244 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.061572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.284366 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.501516 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.562167 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.782718 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.783603 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.002195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.060569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.283243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.283492 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:03.501693 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.560599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.783852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.784006 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.000924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.061226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.282297 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.282987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.501089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.560458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.783051 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.783361 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.001357 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.060980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.282432 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.284945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:05.501078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.560392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.782556 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.782745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.001356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:06.060485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:06.282979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.283057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:06.500697 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:06.561446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:06.783120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.783258 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:07.001429 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:07.060755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:07.281892 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:07.282422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:07.501870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:07.561285 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:07.783836 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:07.783869 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.001179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:08.061434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:08.282620 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:08.282643 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.501890 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:08.561334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:08.782409 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.001428 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:09.060624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:09.283619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.283843 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:09.500869 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:09.561327 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:09.786343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.786990 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:10.001363 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:10.061669 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:10.281724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:10.283241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:10.501499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:10.560382 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:10.783379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:10.783703 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.006867 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:11.061528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:11.282068 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.284097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:11.501425 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:11.561482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:11.781830 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:12.003000 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:12.061220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:12.283490 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:12.283632 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:12.502107 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:12.560563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:12.786245 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:12.787717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:13.002660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:13.061638 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:13.282127 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:13.283171 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:13.501269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:13.560543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:13.783150 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:13.783156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:14.001885 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:14.061206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:14.283314 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:14.283499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:14.505208 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:14.561163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:14.782762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:14.783841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:15.003346 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:15.060844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:15.282760 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:15.284010 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:15.501266 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:15.560665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:15.781811 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:15.782474 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:16.263325 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:16.263338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:16.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:16.283738 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:16.502117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:16.604450 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:16.783760 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:16.783855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:17.005983 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:17.105360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:17.282754 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:17.282882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:17.500988 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:17.560342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:17.782772 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:17.783686 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:18.007857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:18.061140 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:18.283671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:18.283796 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:18.501209 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:18.560948 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:18.783319 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:18.783461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.001371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:19.061031 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:19.282807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:19.283969 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.501517 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:19.561032 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:19.782932 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.783012 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.005480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:20.060901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:20.282259 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.283412 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:20.502027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:20.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:20.782626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.783395 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:21.001871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:21.061472 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:21.283060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:21.283210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:21.501633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:21.561484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:21.782741 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:21.783745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:22.001089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:22.060638 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:22.283014 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:22.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:22.501633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:22.560933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:22.782511 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:22.783627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:23.001249 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:23.060586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:23.281968 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:23.282925 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:23.501824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:23.561702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:23.781838 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:23.782821 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:24.000909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:24.061364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:24.282635 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:24.282833 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:24.500870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:24.561501 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:24.783353 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:24.783411 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:25.001919 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:25.060593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:25.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:25.283280 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:25.501682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:25.560920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:25.782234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:25.782607 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.001990 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:26.062631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:26.281975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:26.283634 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.502337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:26.561388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:26.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.783873 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.000786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.061090 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.282519 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.283219 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:27.502098 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.560684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.782103 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.782356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.001961 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.061081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.283082 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:28.283091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.502080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.560369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.782819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.782888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.001300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.060528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.282927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:29.500881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.561931 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.782352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.001314 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.061754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:30.283911 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.501691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.561708 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.782920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.783505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.018759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.118123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.283780 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.283813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:31.500732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.561257 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.782789 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.783857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.000941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.061352 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.283225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.283376 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.502377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.560813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.782071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.782893 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.001627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.061719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.282356 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.282853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.501995 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.560218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.783100 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.783628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.061301 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.282792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.283319 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.502265 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.603312 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.783237 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.783602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.001558 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.061771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.282165 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.283085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.501433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.560951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.782571 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.783567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.001993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.060500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.282630 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:36.282912 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.501547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.561085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.783668 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.783838 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:37.001644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 463 near-identical kapi.go:96 poll lines elided: the same four add-on pods (kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth) were each re-checked about every 500 ms from 14:21:37 through 14:22:34, and every check reported the same state: Pending: [<nil>] ...]
	I0630 14:22:35.002904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.061777 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.283905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:35.283934 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.501100 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.560247 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.783592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.784358 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.001812 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.062616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.282087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:36.282661 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.500966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.562267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.783442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.001767 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.061035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.282352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.283181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:37.501481 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.562204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.782528 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.783035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.001204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.060871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.282324 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.283278 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.501823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.562308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.784023 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.784618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.000984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.062203 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.283474 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.502760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.563797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.782847 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.782939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.001158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.061550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.281624 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.282091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.501221 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.560905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.782931 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.782945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.002061 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.061582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.283006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.283254 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:41.501580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.561026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.785372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.785518 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.001833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.064672 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.282529 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.283845 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:42.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.561279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.783728 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.784425 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.002525 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.061268 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.283438 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.283504 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:43.501326 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.561048 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.782534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.782716 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.001543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.062385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.282669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.283862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:44.501191 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.562184 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.782210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.783841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.002615 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.282873 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.283074 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.501319 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.560538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.781794 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.783447 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.002122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.060715 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.282111 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:46.282760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.501006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.560037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.784753 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.784785 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.001157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.060804 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.283335 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:47.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.782851 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.783119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.001360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.282370 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:48.283342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.501709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.783888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.784092 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.001883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.060787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.283083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:49.283344 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.501731 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.782681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.000966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.060550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.283074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:50.501643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.561462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.783025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.002569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.063186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.283275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:51.283325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.501455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.560436 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.782975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.783423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.001631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.061667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.281818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.282342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.501284 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.560864 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.782151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.782348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.007368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.060641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.283706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.284276 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:53.501189 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.560654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.782398 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.782656 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.002682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.061286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.282383 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.283815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.501271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.560549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.790530 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.790755 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.001308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.284397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.284413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:55.501771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.781963 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.782941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.000822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.061650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.283524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.283580 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.501667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.560681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.782151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.281690 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.283202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.501647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.561213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.782612 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.001789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.282211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:58.284618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.500839 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.561378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.784612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.784669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.000744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.062091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.660112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.664035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:59.664534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.665074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.782692 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.783576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.003476 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.061094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.285714 1558425 kapi.go:107] duration metric: took 3m43.507242469s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0630 14:23:00.286859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.502299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.561094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.001892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.061673 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.501245 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.005689 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.283736 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.501952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.783177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.002017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.061604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.500854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.561092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.783701 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.063589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.283519 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.501728 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.566277 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.002269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:05.060852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.283974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.507100 1558425 kapi.go:107] duration metric: took 3m45.009344267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:23:05.509228 1558425 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-301682 cluster.
	I0630 14:23:05.510978 1558425 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:23:05.512549 1558425 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
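	The three out.go lines above describe how the gcp-auth addon behaves once enabled: credentials are mounted into every new pod unless the pod opts out via a label. As a minimal sketch of that opt-out (the `gcp-auth-skip-secret` label key is confirmed by the log above; the "true" value, the pod name, and the image are illustrative assumptions, not taken from this report), such a pod manifest could look like:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds              # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"  # key confirmed by the log above; value assumed
	  spec:
	    containers:
	    - name: app
	      image: busybox                # placeholder image
	      command: ["sleep", "3600"]

	For pods created before the addon finished, the log suggests recreating them or rerunning the addon with the refresh flag, e.g. out/minikube-linux-amd64 addons enable gcp-auth --refresh.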
	I0630 14:23:05.561380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.783374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.062392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.561684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.785144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.066028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.284562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.561973 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.785021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.060666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.561745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.783877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.284091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.561492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.787449 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.062802 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.284110 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.560730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.783003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.060643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.284380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.561869 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.060853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.283759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.560457 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.784225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.061224 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.283671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.560056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.783513 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.061509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.283696 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.561206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.784675 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.061356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.284952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.560611 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.784123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.061089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.283173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.786612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.061952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.284288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.561055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.061797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.283435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.560968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.783185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.061655 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.285318 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.561730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.782858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.061290 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.284108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.560495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.783799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.060435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.283888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.560658 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.784042 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.064259 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.283397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.562304 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.783790 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.062882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.283492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.565989 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.061006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.284421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.561604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.783815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.060798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.283106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.572104 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.783229 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.061003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.283003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.783676 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.061789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.283647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.561595 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.784152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.061056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.284078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.561025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.060975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.284112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.561034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.783332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.060612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.284928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.560487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.784282 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.061202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.283691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.561004 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.783682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.283339 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.561471 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.783951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.060926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.283825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.563195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.783726 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.060359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.283321 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.561124 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.061349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.283415 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.561084 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.784344 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.061159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.283670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.562677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.783294 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.062782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.284848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.560236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.783962 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.060039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.283768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.560166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.782740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:39.060825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the same two kapi.go:96 "waiting for pod" messages repeat, alternating, roughly every 250 ms from 14:23:39 through 14:25:16; both pods remain Pending for the entire interval ...]
	I0630 14:25:16.283487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.560928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.779853 1558425 kapi.go:107] duration metric: took 6m0.000148464s to wait for kubernetes.io/minikube-addons=registry ...
	W0630 14:25:16.780114 1558425 out.go:270] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0630 14:25:17.061823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:17.560570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.061810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.557742 1558425 kapi.go:107] duration metric: took 6m0.000905607s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0630 14:25:18.557918 1558425 out.go:270] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
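The repeated kapi.go:96 lines above are a label-selector poll: minikube keeps listing pods matching kubernetes.io/minikube-addons=registry (and =csi-hostpath-driver) and retries until they become Running or the 6-minute context deadline expires, which is the "context deadline exceeded" both warnings report. Below is a minimal client-go sketch of such a wait; the helper name waitForPodsRunning, the 250 ms interval, and the kubeconfig handling are illustrative assumptions, not minikube's actual kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls every 250ms (hypothetical interval) until every pod
// matching selector is Running, or the 6m budget is spent, in which case the
// returned error reports context.DeadlineExceeded: the failure mode seen above.
func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as retryable
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = waitForPodsRunning(context.Background(), client, "kube-system", "kubernetes.io/minikube-addons=registry")
	if err != nil {
		fmt.Println("wait failed:", err) // e.g. "context deadline exceeded"
	}
}

Run against the cluster in this log, a loop like this would print the same "waiting for pod" line about four times a second until the deadline fires, because the registry pod never leaves Pending (it is stuck in ImagePullBackOff).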
	I0630 14:25:18.560047 1558425 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth
	I0630 14:25:18.561439 1558425 addons.go:514] duration metric: took 6m10.426236235s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth]
	I0630 14:25:18.561506 1558425 start.go:246] waiting for cluster config update ...
	I0630 14:25:18.561537 1558425 start.go:255] writing updated cluster config ...
	I0630 14:25:18.561951 1558425 ssh_runner.go:195] Run: rm -f paused
	I0630 14:25:18.569844 1558425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:25:18.574216 1558425 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.580161 1558425 pod_ready.go:94] pod "coredns-674b8bbfcf-gcxhf" is "Ready"
	I0630 14:25:18.580187 1558425 pod_ready.go:86] duration metric: took 5.939771ms for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.583580 1558425 pod_ready.go:83] waiting for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.589631 1558425 pod_ready.go:94] pod "etcd-addons-301682" is "Ready"
	I0630 14:25:18.589656 1558425 pod_ready.go:86] duration metric: took 6.047747ms for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.592675 1558425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.598838 1558425 pod_ready.go:94] pod "kube-apiserver-addons-301682" is "Ready"
	I0630 14:25:18.598865 1558425 pod_ready.go:86] duration metric: took 6.165834ms for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.608664 1558425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.974819 1558425 pod_ready.go:94] pod "kube-controller-manager-addons-301682" is "Ready"
	I0630 14:25:18.974852 1558425 pod_ready.go:86] duration metric: took 366.160564ms for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.183963 1558425 pod_ready.go:83] waiting for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.575199 1558425 pod_ready.go:94] pod "kube-proxy-cm28f" is "Ready"
	I0630 14:25:19.575240 1558425 pod_ready.go:86] duration metric: took 391.247311ms for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.774681 1558425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.173968 1558425 pod_ready.go:94] pod "kube-scheduler-addons-301682" is "Ready"
	I0630 14:25:20.174011 1558425 pod_ready.go:86] duration metric: took 399.300804ms for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.174030 1558425 pod_ready.go:40] duration metric: took 1.603886991s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
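The pod_ready.go:83/94 lines above wait on the standard PodReady condition of each control-plane pod. A hedged one-function sketch of that predicate (package and helper name are illustrative, not minikube's):

package podcheck

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True: the
// signal behind the `pod "..." is "Ready"` lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}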
	I0630 14:25:20.223671 1558425 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:25:20.225538 1558425 out.go:177] * Done! kubectl is now configured to use "addons-301682" cluster and "default" namespace by default
	
	
	==> CRI-O <==
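The section below is CRI-O's debug log of the CRI calls it serves (ImageFsInfo, ListContainers, and so on). For reference, the same endpoints can be exercised by hand on the node with crictl, e.g. crictl ps -a for ListContainers and crictl imagefsinfo for ImageFsInfo, which is a quick way to cross-check what the runtime reports.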
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.094216043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293902094189037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76957a89-2a68-4daf-9bc4-ced2870bf1db name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.094947441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=200716f0-d7e7-401c-a7ca-da4a1a07748c name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.095003337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=200716f0-d7e7-401c-a7ca-da4a1a07748c name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.095622973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0,PodSandboxId:43868af5a7e43fcff04d95b0e60c4b31b9e26b455c2e4032e94e4c1797966944,Metadata:&ContainerMetadata{Name:gadget,Attempt:5,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a,State:CONTAINER_EXITED,CreatedAt:1751293776933959082,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mrnh4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f033c8a2-1ce7-4009-8b24-756b9f31550e,},Annotations:map[string]string{io.kubernetes.container.hash: 1446b873,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernet
es.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e
-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[s
tring]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&Contai
nerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953b
bd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c973657
8bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2
714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886c
abb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.po
d.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639
610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067
c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kub
e-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Im
ageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{
Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{
Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1
ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=200716f0-d7e7-401c-a7ca-da4a1a07748c name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.138987932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60a15832-b475-4653-8d56-ec20b9f6b303 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.139064667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60a15832-b475-4653-8d56-ec20b9f6b303 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.140333451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5de9a34-7d82-437a-be12-a5e52cd95663 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.144842708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293902144804833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5de9a34-7d82-437a-be12-a5e52cd95663 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.148970581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efb6d81f-87b4-4029-92f9-28bd7860532e name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.149134629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efb6d81f-87b4-4029-92f9-28bd7860532e name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.149785978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0,PodSandboxId:43868af5a7e43fcff04d95b0e60c4b31b9e26b455c2e4032e94e4c1797966944,Metadata:&ContainerMetadata{Name:gadget,Attempt:5,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a,State:CONTAINER_EXITED,CreatedAt:1751293776933959082,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mrnh4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f033c8a2-1ce7-4009-8b24-756b9f31550e,},Annotations:map[string]string{io.kubernetes.container.hash: 1446b873,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernet
es.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e
-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[s
tring]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&Contai
nerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953b
bd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c973657
8bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2
714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886c
abb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.po
d.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639
610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067
c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kub
e-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Im
ageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{
Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{
Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1
ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efb6d81f-87b4-4029-92f9-28bd7860532e name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.237888594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=914bb178-951c-47c0-8689-1a63e7663e1c name=/runtime.v1.RuntimeService/Version
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.237991427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=914bb178-951c-47c0-8689-1a63e7663e1c name=/runtime.v1.RuntimeService/Version
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.240742309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=624029a8-e9b0-468c-abf1-cfb96360d86c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.241789297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293902241758022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=624029a8-e9b0-468c-abf1-cfb96360d86c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.242510715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e4212d5-643f-418d-bf08-42af8976ef8e name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.242616635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e4212d5-643f-418d-bf08-42af8976ef8e name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:31:42 addons-301682 crio[849]: time="2025-06-30 14:31:42.243480444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0,PodSandboxId:43868af5a7e43fcff04d95b0e60c4b31b9e26b455c2e4032e94e4c1797966944,Metadata:&ContainerMetadata{Name:gadget,Attempt:5,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a,State:CONTAINER_EXITED,CreatedAt:1751293776933959082,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mrnh4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f033c8a2-1ce7-4009-8b24-756b9f31550e,},Annotations:map[string]string{io.kubernetes.container.hash: 1446b873,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernet
es.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e
-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[s
tring]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&Contai
nerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953b
bd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c973657
8bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2
714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886c
abb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.po
d.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e4212d5-643f-418d-bf08-42af8976ef8e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	608862faed0c0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469                            2 minutes ago       Exited              gadget                                   5                   43868af5a7e43       gadget-mrnh4
	ccb1fec83c55c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   744d3a8558a51       busybox
	f4356fb8a203d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	505ec6a97e3e1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	0e8810b68e820       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            8 minutes ago       Running             liveness-probe                           0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	977ef3af77456       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	12db79e5b741e       registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15                             8 minutes ago       Running             controller                               0                   e27c33843e336       ingress-nginx-controller-67687b59dd-hqql8
	5dfe9d02b1b1a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	470ef449849e9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   4736a1c095805       snapshot-controller-68b874b76f-m97pd
	90d1724e2a8e9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   10 minutes ago      Running             csi-external-health-monitor-controller   0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	089511c925cdb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   901b27bd18ec3       snapshot-controller-68b874b76f-zvnk2
	c2e8c85ce8151       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   754958dc28d19       csi-hostpath-resizer-0
	ba49554ce7e85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   ef302c090f9a8       csi-hostpath-attacher-0
	78d53c20b85a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   11 minutes ago      Exited              patch                                    0                   4e975a881fa17       ingress-nginx-admission-patch-9xc5z
	1322675057a2e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             11 minutes ago      Running             local-path-provisioner                   0                   54b7dce23ad65       local-path-provisioner-76f89f99b5-gzp6b
	8394bba22fffd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   11 minutes ago      Exited              create                                   0                   7cdcf7a057d5a       ingress-nginx-admission-create-fnqjq
	87b37034569df       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              11 minutes ago      Running             registry-proxy                           0                   ab80df45e204e       registry-proxy-2dgr9
	aca5b14e1bc43       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             11 minutes ago      Running             minikube-ingress-dns                     0                   7f285ffa7ac9c       kube-ingress-dns-minikube
	70d635c9d667c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     12 minutes ago      Running             amd-gpu-device-plugin                    0                   3d37e16d91d2b       amd-gpu-device-plugin-g5z6w
	f3766ac202b89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             12 minutes ago      Running             storage-provisioner                      0                   97a7ca87e0fdb       storage-provisioner
	5aadabb8b1bfc       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                                             12 minutes ago      Running             coredns                                  0                   78956e77203cb       coredns-674b8bbfcf-gcxhf
	f10061ba824c0       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                                                             12 minutes ago      Running             kube-proxy                               0                   b60868a950e81       kube-proxy-cm28f
	ccc99095a0e73       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                                                             12 minutes ago      Running             kube-apiserver                           0                   3b49e7f986574       kube-apiserver-addons-301682
	b4d0fe15b4640       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                                                             12 minutes ago      Running             kube-controller-manager                  0                   793d3507bd395       kube-controller-manager-addons-301682
	a117b554832ef       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                                             12 minutes ago      Running             etcd                                     0                   ecf8d198683c7       etcd-addons-301682
	4e556fe1e25cc       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                                                             12 minutes ago      Running             kube-scheduler                           0                   d882c0c670fce       kube-scheduler-addons-301682
	
	
	==> coredns [5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2] <==
	[INFO] 10.244.0.7:50755 - 63517 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00019073s
	[INFO] 10.244.0.7:33083 - 32214 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000217766s
	[INFO] 10.244.0.7:33083 - 55851 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000294193s
	[INFO] 10.244.0.7:33083 - 56132 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084306s
	[INFO] 10.244.0.7:33083 - 18875 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000139064s
	[INFO] 10.244.0.7:33083 - 44306 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000096657s
	[INFO] 10.244.0.7:33083 - 4736 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00026046s
	[INFO] 10.244.0.7:33083 - 45602 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000096759s
	[INFO] 10.244.0.7:33083 - 55342 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000178196s
	[INFO] 10.244.0.7:42513 - 4700 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000194637s
	[INFO] 10.244.0.7:42513 - 64342 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00035116s
	[INFO] 10.244.0.7:42513 - 49258 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000130873s
	[INFO] 10.244.0.7:42513 - 36695 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000187162s
	[INFO] 10.244.0.7:42513 - 40379 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000079848s
	[INFO] 10.244.0.7:42513 - 33122 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000186816s
	[INFO] 10.244.0.7:42513 - 12660 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000086812s
	[INFO] 10.244.0.7:42513 - 19950 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000196841s
	[INFO] 10.244.0.7:35867 - 41300 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000218531s
	[INFO] 10.244.0.7:35867 - 26491 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00049793s
	[INFO] 10.244.0.7:35867 - 24741 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000134544s
	[INFO] 10.244.0.7:35867 - 16330 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00027048s
	[INFO] 10.244.0.7:35867 - 1644 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000107198s
	[INFO] 10.244.0.7:35867 - 24491 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00021317s
	[INFO] 10.244.0.7:35867 - 7843 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000158539s
	[INFO] 10.244.0.7:35867 - 16773 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000278566s
	
	
	==> describe nodes <==
	Name:               addons-301682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-301682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-301682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-301682
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-301682"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-301682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:31:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:19:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-301682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3f7748b45e54c5d95a766f7ac118097
	  System UUID:                c3f7748b-45e5-4c5d-95a7-66f7ac118097
	  Boot ID:                    4dcad91c-eb4d-46c9-ae52-10be6c00fd59
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gadget                      gadget-mrnh4                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-67687b59dd-hqql8                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 amd-gpu-device-plugin-g5z6w                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-674b8bbfcf-gcxhf                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-h4qg2                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-301682                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-301682                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-301682                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cm28f                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-301682                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-694bd45846-x8cnn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-creds-6b69cdcdd5-n9cld                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-2dgr9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-68b874b76f-m97pd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-68b874b76f-zvnk2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  local-path-storage          local-path-provisioner-76f89f99b5-gzp6b                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-301682 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-301682 event: Registered Node addons-301682 in Controller
	
	
	==> dmesg <==
	[  +0.030230] kauditd_printk_skb: 118 callbacks suppressed
	[  +3.981178] kauditd_printk_skb: 99 callbacks suppressed
	[ +14.133007] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.888041] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:20] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.101498] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.564016] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.018820] kauditd_printk_skb: 4 callbacks suppressed
	[Jun30 14:22] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.468740] kauditd_printk_skb: 33 callbacks suppressed
	[Jun30 14:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.720029] kauditd_printk_skb: 37 callbacks suppressed
	[Jun30 14:25] kauditd_printk_skb: 33 callbacks suppressed
	[  +3.578772] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.590938] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.177192] kauditd_printk_skb: 20 callbacks suppressed
	[Jun30 14:26] kauditd_printk_skb: 4 callbacks suppressed
	[ +46.460054] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:27] kauditd_printk_skb: 2 callbacks suppressed
	[ +35.275184] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:29] kauditd_printk_skb: 9 callbacks suppressed
	[ +22.041327] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:30] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb] <==
	{"level":"info","ts":"2025-06-30T14:21:16.254726Z","caller":"traceutil/trace.go:171","msg":"trace[347540210] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"200.343691ms","start":"2025-06-30T14:21:16.054373Z","end":"2025-06-30T14:21:16.254716Z","steps":["trace[347540210] 'agreement among raft nodes before linearized reading'  (duration: 200.191188ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.254998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.889254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:21:16.255051Z","caller":"traceutil/trace.go:171","msg":"trace[2072353184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"259.964064ms","start":"2025-06-30T14:21:15.995079Z","end":"2025-06-30T14:21:16.255043Z","steps":["trace[2072353184] 'agreement among raft nodes before linearized reading'  (duration: 259.892612ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:21:16.256094Z","caller":"traceutil/trace.go:171","msg":"trace[752785918] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"419.629539ms","start":"2025-06-30T14:21:15.836340Z","end":"2025-06-30T14:21:16.255969Z","steps":["trace[752785918] 'process raft request'  (duration: 416.770167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.256259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:21:15.836292Z","time spent":"419.882706ms","remote":"127.0.0.1:55816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1189 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-06-30T14:22:57.074171Z","caller":"traceutil/trace.go:171","msg":"trace[97580462] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"235.032412ms","start":"2025-06-30T14:22:56.839110Z","end":"2025-06-30T14:22:57.074143Z","steps":["trace[97580462] 'process raft request'  (duration: 234.613297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.462692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650406Z","caller":"traceutil/trace.go:171","msg":"trace[1036457483] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"155.081366ms","start":"2025-06-30T14:22:59.495275Z","end":"2025-06-30T14:22:59.650356Z","steps":["trace[1036457483] 'range keys from in-memory index tree'  (duration: 154.411147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:22:59.650586Z","caller":"traceutil/trace.go:171","msg":"trace[806257844] transaction","detail":"{read_only:false; response_revision:1386; number_of_response:1; }","duration":"115.895314ms","start":"2025-06-30T14:22:59.534680Z","end":"2025-06-30T14:22:59.650576Z","steps":["trace[806257844] 'process raft request'  (duration: 113.707335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"485.393683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650888Z","caller":"traceutil/trace.go:171","msg":"trace[707366630] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1385; }","duration":"486.585604ms","start":"2025-06-30T14:22:59.164295Z","end":"2025-06-30T14:22:59.650881Z","steps":["trace[707366630] 'range keys from in-memory index tree'  (duration: 485.334873ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.650922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.164282Z","time spent":"486.621786ms","remote":"127.0.0.1:55612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.09899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651010Z","caller":"traceutil/trace.go:171","msg":"trace[926388769] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"375.285797ms","start":"2025-06-30T14:22:59.275719Z","end":"2025-06-30T14:22:59.651005Z","steps":["trace[926388769] 'range keys from in-memory index tree'  (duration: 374.055569ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.275706Z","time spent":"375.316283ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.573265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651095Z","caller":"traceutil/trace.go:171","msg":"trace[444156936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"374.826279ms","start":"2025-06-30T14:22:59.276264Z","end":"2025-06-30T14:22:59.651090Z","steps":["trace[444156936] 'range keys from in-memory index tree'  (duration: 373.54342ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.276255Z","time spent":"374.850773ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.221471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651162Z","caller":"traceutil/trace.go:171","msg":"trace[72079455] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1385; }","duration":"136.411789ms","start":"2025-06-30T14:22:59.514744Z","end":"2025-06-30T14:22:59.651156Z","steps":["trace[72079455] 'range keys from in-memory index tree'  (duration: 135.196228ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:25:50.156282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.241875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-06-30T14:25:50.156408Z","caller":"traceutil/trace.go:171","msg":"trace[1656189336] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1889; }","duration":"105.429353ms","start":"2025-06-30T14:25:50.050958Z","end":"2025-06-30T14:25:50.156387Z","steps":["trace[1656189336] 'range keys from in-memory index tree'  (duration: 105.167742ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:28:59.297152Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1538}
	{"level":"info","ts":"2025-06-30T14:28:59.333481Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1538,"took":"35.184312ms","hash":3459685430,"current-db-size-bytes":7704576,"current-db-size":"7.7 MB","current-db-size-in-use-bytes":4759552,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2025-06-30T14:28:59.333691Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3459685430,"revision":1538,"compact-revision":-1}
	
	
	==> kernel <==
	 14:31:42 up 13 min,  0 users,  load average: 0.41, 0.55, 0.55
	Linux addons-301682 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0630 14:20:17.020266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0630 14:20:17.020272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0630 14:20:30.566598       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.249.255:443: connect: connection refused" logger="UnhandledError"
	W0630 14:20:30.568692       1 handler_proxy.go:99] no RequestInfo found in the context
	E0630 14:20:30.568788       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0630 14:20:30.592794       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0630 14:20:30.602722       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0630 14:25:32.039384       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43658: use of closed network connection
	E0630 14:25:32.235328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43690: use of closed network connection
	I0630 14:25:35.327796       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:40.911437       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:25:41.137079       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.71.181"}
	I0630 14:25:41.142822       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:41.721263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.215.125"}
	I0630 14:25:47.346218       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:26:31.606219       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:27:03.338971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:27:51.135976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:00.946999       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:59.400677       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0] <==
	I0630 14:19:37.949773       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0630 14:19:37.949832       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0630 14:19:38.050496       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:19:38.384965       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0630 14:19:38.390227       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0630 14:19:38.491972       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0630 14:20:08.056813       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0630 14:20:08.499327       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0630 14:25:45.545454       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0630 14:27:13.514636       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0630 14:28:33.141336       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.240744       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.265499       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.301094       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.355413       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.455220       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.632282       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.965912       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:34.621606       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:35.919826       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:38.493394       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:43.650839       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:53.905832       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:29:12.559067       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I0630 14:29:58.729598       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b] <==
	E0630 14:19:09.616075       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:19:09.628197       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0630 14:19:09.628280       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:19:09.728584       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:19:09.728641       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:19:09.728663       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:19:09.760004       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:19:09.760419       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:19:09.760431       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:19:09.761800       1 config.go:199] "Starting service config controller"
	I0630 14:19:09.761820       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:19:09.764743       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:19:09.764796       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:19:09.764830       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:19:09.764834       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:19:09.770113       1 config.go:329] "Starting node config controller"
	I0630 14:19:09.770142       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:19:09.862889       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:19:09.865227       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:19:09.865265       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:19:09.870697       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627] <==
	E0630 14:19:00.996185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:19:00.996326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:19:00.996316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:00.996403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:00.996618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:00.998826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:00.999006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:01.002700       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.002834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:01.865362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:01.884714       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.908759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:01.937379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:19:01.938367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:01.983087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:19:02.032891       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:19:02.058487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:19:02.131893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:02.191157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:02.310584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:02.326588       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:02.381605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 14:19:04.769814       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
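	
	The "Failed to watch" errors above are the usual kube-scheduler startup race: its informers begin listing resources before the RBAC bindings for system:kube-scheduler have propagated, and the "Caches are synced" line shows they resolved by 14:19:04. Had they persisted past startup, one way to check the scheduler's effective permissions would be kubectl's impersonation support (illustrative commands, not part of this test run):
	
	  kubectl --context addons-301682 auth can-i list pods --as=system:kube-scheduler
	  kubectl --context addons-301682 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler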
	
	
	==> kubelet <==
	Jun 30 14:30:55 addons-301682 kubelet[1543]: I0630 14:30:55.696980    1543 scope.go:117] "RemoveContainer" containerID="608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0"
	Jun 30 14:30:55 addons-301682 kubelet[1543]: E0630 14:30:55.697289    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-mrnh4_gadget(f033c8a2-1ce7-4009-8b24-756b9f31550e)\"" pod="gadget/gadget-mrnh4" podUID="f033c8a2-1ce7-4009-8b24-756b9f31550e"
	Jun 30 14:31:04 addons-301682 kubelet[1543]: E0630 14:31:04.120617    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293864120185179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:04 addons-301682 kubelet[1543]: E0630 14:31:04.120659    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293864120185179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:05 addons-301682 kubelet[1543]: I0630 14:31:05.697705    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-g5z6w" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:31:08 addons-301682 kubelet[1543]: I0630 14:31:08.695226    1543 scope.go:117] "RemoveContainer" containerID="608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0"
	Jun 30 14:31:08 addons-301682 kubelet[1543]: E0630 14:31:08.695888    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-mrnh4_gadget(f033c8a2-1ce7-4009-8b24-756b9f31550e)\"" pod="gadget/gadget-mrnh4" podUID="f033c8a2-1ce7-4009-8b24-756b9f31550e"
	Jun 30 14:31:10 addons-301682 kubelet[1543]: I0630 14:31:10.695479    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2dgr9" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:31:14 addons-301682 kubelet[1543]: E0630 14:31:14.126032    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293874124770205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:14 addons-301682 kubelet[1543]: E0630 14:31:14.126654    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293874124770205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:18 addons-301682 kubelet[1543]: E0630 14:31:18.530278    1543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:31:18 addons-301682 kubelet[1543]: E0630 14:31:18.530355    1543 kuberuntime_image.go:42] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:31:18 addons-301682 kubelet[1543]: E0630 14:31:18.530696    1543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcnmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(32226795-7a22-4935-b60c-8553d2716e86): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:31:18 addons-301682 kubelet[1543]: E0630 14:31:18.532028    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="32226795-7a22-4935-b60c-8553d2716e86"
	Jun 30 14:31:19 addons-301682 kubelet[1543]: E0630 14:31:19.264302    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="32226795-7a22-4935-b60c-8553d2716e86"
	Jun 30 14:31:20 addons-301682 kubelet[1543]: I0630 14:31:20.695789    1543 scope.go:117] "RemoveContainer" containerID="608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0"
	Jun 30 14:31:20 addons-301682 kubelet[1543]: E0630 14:31:20.696005    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-mrnh4_gadget(f033c8a2-1ce7-4009-8b24-756b9f31550e)\"" pod="gadget/gadget-mrnh4" podUID="f033c8a2-1ce7-4009-8b24-756b9f31550e"
	Jun 30 14:31:24 addons-301682 kubelet[1543]: E0630 14:31:24.131144    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293884129582155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:24 addons-301682 kubelet[1543]: E0630 14:31:24.131183    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293884129582155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:30 addons-301682 kubelet[1543]: E0630 14:31:30.241629    1543 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Jun 30 14:31:30 addons-301682 kubelet[1543]: E0630 14:31:30.242155    1543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042a3494-2e07-4ce8-b9f8-7d37cf08138d-gcr-creds podName:042a3494-2e07-4ce8-b9f8-7d37cf08138d nodeName:}" failed. No retries permitted until 2025-06-30 14:33:32.242111963 +0000 UTC m=+868.698411433 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/042a3494-2e07-4ce8-b9f8-7d37cf08138d-gcr-creds") pod "registry-creds-6b69cdcdd5-n9cld" (UID: "042a3494-2e07-4ce8-b9f8-7d37cf08138d") : secret "registry-creds-gcr" not found
	Jun 30 14:31:34 addons-301682 kubelet[1543]: E0630 14:31:34.133809    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293894133461423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:34 addons-301682 kubelet[1543]: E0630 14:31:34.133847    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293894133461423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:34 addons-301682 kubelet[1543]: I0630 14:31:34.695804    1543 scope.go:117] "RemoveContainer" containerID="608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0"
	Jun 30 14:31:34 addons-301682 kubelet[1543]: E0630 14:31:34.696013    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-mrnh4_gadget(f033c8a2-1ce7-4009-8b24-756b9f31550e)\"" pod="gadget/gadget-mrnh4" podUID="f033c8a2-1ce7-4009-8b24-756b9f31550e"
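	
	The dominant failure in the kubelet log is docker.io's unauthenticated pull rate limit ("toomanyrequests") on docker.io/nginx. When reproducing locally, a hedged workaround is to pull the image on the host, where credentials can be supplied, and side-load it into the node instead of letting the kubelet pull (illustrative commands, not something the test itself does):
	
	  docker pull docker.io/nginx:alpine
	  out/minikube-linux-amd64 -p addons-301682 image load docker.io/nginx:alpine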
	
	
	==> storage-provisioner [f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4] <==
	W0630 14:31:17.903461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:19.906927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:19.915742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:21.918800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:21.924520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:23.928335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:23.934700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:25.937893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:25.943730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:27.946793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:27.954779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:29.957585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:29.962838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:31.966724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:31.974343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:33.978417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:33.986657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:35.991010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:35.997112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:38.000352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:38.005785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:40.009211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:40.018189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:42.022504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:42.029460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
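	
	The storage-provisioner warnings repeat every couple of seconds because it still lists and watches v1 Endpoints for leader election, which Kubernetes deprecates in v1.33+ in favor of EndpointSlice; they are noise here rather than a failure. The replacement resource the warning points at can be listed directly (illustrative command):
	
	  kubectl --context addons-301682 get endpointslices.discovery.k8s.io -A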
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
helpers_test.go:261: (dbg) Run:  kubectl --context addons-301682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0: exit status 1 (101.902787ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:25:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9gdz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9gdz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/nginx to addons-301682
	  Warning  Failed     4m53s                kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s (x3 over 4m53s)  kubelet            Error: ErrImagePull
	  Warning  Failed     95s (x2 over 3m44s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    69s (x4 over 4m52s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     69s (x4 over 4m52s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    54s (x4 over 6m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:30:11 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcnmb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-jcnmb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  92s                default-scheduler  Successfully assigned default/task-pv-pod to addons-301682
	  Warning  Failed     25s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     25s                kubelet            Error: ErrImagePull
	  Normal   BackOff    24s                kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     24s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    12s (x2 over 91s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l844 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6l844:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fnqjq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9xc5z" not found
	Error from server (NotFound): pods "registry-694bd45846-x8cnn" not found
	Error from server (NotFound): pods "registry-creds-6b69cdcdd5-n9cld" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (363.43s)
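Two separate problems surface in this failure: the registry pod never got past ImagePullBackOff, and registry-creds-6b69cdcdd5-n9cld could not mount the missing "registry-creds-gcr" secret (see the kubelet log at 14:31:30). The audit table further below shows the harness later running `addons configure registry-creds`; supplying the credentials by hand and confirming the secret would look roughly like this (a sketch, reusing the same config-file form the harness used):

  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-301682
  kubectl --context addons-301682 -n kube-system get secret registry-creds-gcr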

                                                
                                    
x
+
TestAddons/parallel/Ingress (492.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-301682 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-301682 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-301682 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a7647f82-c5fc-422d-8b99-fe25edb95f59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-06-30 14:33:41.431944962 +0000 UTC m=+964.519825768
addons_test.go:252: (dbg) Run:  kubectl --context addons-301682 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-301682 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-301682/192.168.39.227
Start Time:       Mon, 30 Jun 2025 14:25:41 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
IP:  10.244.0.25
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9gdz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-f9gdz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m                   default-scheduler  Successfully assigned default/nginx to addons-301682
Warning  Failed     6m51s                kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m52s (x4 over 8m)   kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     80s (x4 over 6m51s)  kubelet            Error: ErrImagePull
Warning  Failed     80s (x3 over 5m42s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    11s (x9 over 6m50s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     11s (x9 over 6m50s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-301682 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-301682 logs nginx -n default: exit status 1 (80.195121ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:252: kubectl --context addons-301682 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
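To distinguish a genuine Docker Hub rate limit from a cluster networking problem, the pull can be retried by hand on the node; a minimal sketch, assuming crictl is available inside the minikube VM (it ships in the minikube ISO) and crio is the runtime, as here:

  out/minikube-linux-amd64 -p addons-301682 ssh -- sudo crictl pull docker.io/nginx:alpine

If this also returns "toomanyrequests", the limit is being hit at the registry itself, not inside the cluster.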
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-301682 -n addons-301682
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 logs -n 25: (1.459007591s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | -o=json --download-only              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | -p download-only-781147              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | --download-only -p                   | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | binary-mirror-095233                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44619               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-095233              | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| addons  | disable dashboard -p                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| start   | -p addons-301682 --wait=true         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:25 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:29 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | configure registry-creds -f          | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | ./testdata/addons_testconfig.json    |                      |         |         |                     |                     |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | disable registry-creds               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:32 UTC | 30 Jun 25 14:33 UTC |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
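	
	For reference, the multi-row start entry in the Audit table above reassembles into this single invocation (reconstructed from the Args column, not re-run here):
	
	  out/minikube-linux-amd64 start -p addons-301682 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher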
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:18:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:18:18.914659 1558425 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:18:18.914940 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.914950 1558425 out.go:358] Setting ErrFile to fd 2...
	I0630 14:18:18.914954 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.915163 1558425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:18:18.915795 1558425 out.go:352] Setting JSON to false
	I0630 14:18:18.916730 1558425 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":28791,"bootTime":1751264308,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:18:18.916865 1558425 start.go:140] virtualization: kvm guest
	I0630 14:18:18.918804 1558425 out.go:177] * [addons-301682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:18:18.920591 1558425 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:18:18.920596 1558425 notify.go:220] Checking for updates...
	I0630 14:18:18.923430 1558425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:18:18.924993 1558425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:18:18.926449 1558425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:18.927916 1558425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:18:18.929158 1558425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:18:18.930609 1558425 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:18:18.965828 1558425 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:18:18.967229 1558425 start.go:304] selected driver: kvm2
	I0630 14:18:18.967249 1558425 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:18:18.967260 1558425 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:18:18.968055 1558425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.968161 1558425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:18:18.984884 1558425 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:18:18.984967 1558425 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:18:18.985269 1558425 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:18:18.985311 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:18.985360 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:18.985373 1558425 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:18:18.985492 1558425 start.go:347] cluster config:
	{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:18.985616 1558425 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.987784 1558425 out.go:177] * Starting "addons-301682" primary control-plane node in "addons-301682" cluster
	I0630 14:18:18.989175 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:18.989236 1558425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 14:18:18.989252 1558425 cache.go:56] Caching tarball of preloaded images
	I0630 14:18:18.989351 1558425 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 14:18:18.989366 1558425 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 14:18:18.989808 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:18.989840 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json: {Name:mk0b97369f17da476cd2a8393ae45d3ce84c94a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
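	
	The profile config written here can be inspected after the run; one illustrative way is minikube's JSON output for profiles:
	
	  out/minikube-linux-amd64 profile list -o json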
	I0630 14:18:18.990016 1558425 start.go:360] acquireMachinesLock for addons-301682: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:18:18.990075 1558425 start.go:364] duration metric: took 40.808µs to acquireMachinesLock for "addons-301682"
	I0630 14:18:18.990091 1558425 start.go:93] Provisioning new machine with config: &{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:18:18.990156 1558425 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:18:18.992039 1558425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:18:18.992210 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:18:18.992268 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:18:19.009360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0630 14:18:19.009944 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:18:19.010513 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:18:19.010538 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:18:19.010965 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:18:19.011233 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:19.011437 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:19.011652 1558425 start.go:159] libmachine.API.Create for "addons-301682" (driver="kvm2")
	I0630 14:18:19.011686 1558425 client.go:168] LocalClient.Create starting
	I0630 14:18:19.011737 1558425 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 14:18:19.156936 1558425 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 14:18:19.413430 1558425 main.go:141] libmachine: Running pre-create checks...
	I0630 14:18:19.413459 1558425 main.go:141] libmachine: (addons-301682) Calling .PreCreateCheck
	I0630 14:18:19.414009 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:19.414492 1558425 main.go:141] libmachine: Creating machine...
	I0630 14:18:19.414509 1558425 main.go:141] libmachine: (addons-301682) Calling .Create
	I0630 14:18:19.414658 1558425 main.go:141] libmachine: (addons-301682) creating KVM machine...
	I0630 14:18:19.414680 1558425 main.go:141] libmachine: (addons-301682) creating network...
	I0630 14:18:19.416107 1558425 main.go:141] libmachine: (addons-301682) DBG | found existing default KVM network
	I0630 14:18:19.416967 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.416813 1558447 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0630 14:18:19.417027 1558425 main.go:141] libmachine: (addons-301682) DBG | created network xml: 
	I0630 14:18:19.417047 1558425 main.go:141] libmachine: (addons-301682) DBG | <network>
	I0630 14:18:19.417058 1558425 main.go:141] libmachine: (addons-301682) DBG |   <name>mk-addons-301682</name>
	I0630 14:18:19.417065 1558425 main.go:141] libmachine: (addons-301682) DBG |   <dns enable='no'/>
	I0630 14:18:19.417074 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417083 1558425 main.go:141] libmachine: (addons-301682) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:18:19.417095 1558425 main.go:141] libmachine: (addons-301682) DBG |     <dhcp>
	I0630 14:18:19.417105 1558425 main.go:141] libmachine: (addons-301682) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:18:19.417114 1558425 main.go:141] libmachine: (addons-301682) DBG |     </dhcp>
	I0630 14:18:19.417134 1558425 main.go:141] libmachine: (addons-301682) DBG |   </ip>
	I0630 14:18:19.417161 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417196 1558425 main.go:141] libmachine: (addons-301682) DBG | </network>
	I0630 14:18:19.417211 1558425 main.go:141] libmachine: (addons-301682) DBG | 
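The XML above is the private network definition the kvm2 driver hands to libvirt. Below is a minimal Go sketch of the same effect, shelling out to virsh instead of calling the libvirt API the driver actually uses; the virsh route and the temp-file handling are assumptions for illustration, not the driver's implementation.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineNetwork writes the network XML to a temp file, then asks
// libvirt (via virsh) to define and start it. This approximates what
// the kvm2 driver does directly through the libvirt API.
func defineNetwork(name, xml string) error {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(xml); err != nil {
		return err
	}
	f.Close()
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", name},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	xml := `<network>
  <name>mk-addons-301682</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
  </ip>
</network>`
	fmt.Println(defineNetwork("mk-addons-301682", xml))
}
```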
	I0630 14:18:19.422966 1558425 main.go:141] libmachine: (addons-301682) DBG | trying to create private KVM network mk-addons-301682 192.168.39.0/24...
	I0630 14:18:19.504039 1558425 main.go:141] libmachine: (addons-301682) DBG | private KVM network mk-addons-301682 192.168.39.0/24 created
	I0630 14:18:19.504091 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.503994 1558447 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.504105 1558425 main.go:141] libmachine: (addons-301682) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.504121 1558425 main.go:141] libmachine: (addons-301682) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:18:19.504170 1558425 main.go:141] libmachine: (addons-301682) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:18:19.852642 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.852518 1558447 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa...
	I0630 14:18:19.994685 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994513 1558447 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk...
	I0630 14:18:19.994718 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing magic tar header
	I0630 14:18:19.994732 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing SSH key tar header
	I0630 14:18:19.994739 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994653 1558447 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.994842 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682
	I0630 14:18:19.994876 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 14:18:19.994890 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 (perms=drwx------)
	I0630 14:18:19.994904 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:18:19.994914 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 14:18:19.994928 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 14:18:19.994937 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:18:19.994950 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:18:19.994964 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.994974 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:19.994989 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 14:18:19.994999 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:18:19.995008 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins
	I0630 14:18:19.995017 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home
	I0630 14:18:19.995028 1558425 main.go:141] libmachine: (addons-301682) DBG | skipping /home - not owner
	I0630 14:18:19.996388 1558425 main.go:141] libmachine: (addons-301682) define libvirt domain using xml: 
	I0630 14:18:19.996417 1558425 main.go:141] libmachine: (addons-301682) <domain type='kvm'>
	I0630 14:18:19.996424 1558425 main.go:141] libmachine: (addons-301682)   <name>addons-301682</name>
	I0630 14:18:19.996429 1558425 main.go:141] libmachine: (addons-301682)   <memory unit='MiB'>4096</memory>
	I0630 14:18:19.996434 1558425 main.go:141] libmachine: (addons-301682)   <vcpu>2</vcpu>
	I0630 14:18:19.996437 1558425 main.go:141] libmachine: (addons-301682)   <features>
	I0630 14:18:19.996441 1558425 main.go:141] libmachine: (addons-301682)     <acpi/>
	I0630 14:18:19.996445 1558425 main.go:141] libmachine: (addons-301682)     <apic/>
	I0630 14:18:19.996450 1558425 main.go:141] libmachine: (addons-301682)     <pae/>
	I0630 14:18:19.996454 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996496 1558425 main.go:141] libmachine: (addons-301682)   </features>
	I0630 14:18:19.996523 1558425 main.go:141] libmachine: (addons-301682)   <cpu mode='host-passthrough'>
	I0630 14:18:19.996559 1558425 main.go:141] libmachine: (addons-301682)   
	I0630 14:18:19.996579 1558425 main.go:141] libmachine: (addons-301682)   </cpu>
	I0630 14:18:19.996596 1558425 main.go:141] libmachine: (addons-301682)   <os>
	I0630 14:18:19.996607 1558425 main.go:141] libmachine: (addons-301682)     <type>hvm</type>
	I0630 14:18:19.996615 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='cdrom'/>
	I0630 14:18:19.996623 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='hd'/>
	I0630 14:18:19.996628 1558425 main.go:141] libmachine: (addons-301682)     <bootmenu enable='no'/>
	I0630 14:18:19.996634 1558425 main.go:141] libmachine: (addons-301682)   </os>
	I0630 14:18:19.996639 1558425 main.go:141] libmachine: (addons-301682)   <devices>
	I0630 14:18:19.996646 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='cdrom'>
	I0630 14:18:19.996654 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/boot2docker.iso'/>
	I0630 14:18:19.996661 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hdc' bus='scsi'/>
	I0630 14:18:19.996666 1558425 main.go:141] libmachine: (addons-301682)       <readonly/>
	I0630 14:18:19.996672 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996677 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='disk'>
	I0630 14:18:19.996687 1558425 main.go:141] libmachine: (addons-301682)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:18:19.996710 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk'/>
	I0630 14:18:19.996729 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hda' bus='virtio'/>
	I0630 14:18:19.996742 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996753 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996766 1558425 main.go:141] libmachine: (addons-301682)       <source network='mk-addons-301682'/>
	I0630 14:18:19.996777 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996786 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996796 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996808 1558425 main.go:141] libmachine: (addons-301682)       <source network='default'/>
	I0630 14:18:19.996821 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996847 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996868 1558425 main.go:141] libmachine: (addons-301682)     <serial type='pty'>
	I0630 14:18:19.996884 1558425 main.go:141] libmachine: (addons-301682)       <target port='0'/>
	I0630 14:18:19.996899 1558425 main.go:141] libmachine: (addons-301682)     </serial>
	I0630 14:18:19.996909 1558425 main.go:141] libmachine: (addons-301682)     <console type='pty'>
	I0630 14:18:19.996918 1558425 main.go:141] libmachine: (addons-301682)       <target type='serial' port='0'/>
	I0630 14:18:19.996928 1558425 main.go:141] libmachine: (addons-301682)     </console>
	I0630 14:18:19.996938 1558425 main.go:141] libmachine: (addons-301682)     <rng model='virtio'>
	I0630 14:18:19.996951 1558425 main.go:141] libmachine: (addons-301682)       <backend model='random'>/dev/random</backend>
	I0630 14:18:19.996962 1558425 main.go:141] libmachine: (addons-301682)     </rng>
	I0630 14:18:19.996969 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996980 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996990 1558425 main.go:141] libmachine: (addons-301682)   </devices>
	I0630 14:18:19.997056 1558425 main.go:141] libmachine: (addons-301682) </domain>
	I0630 14:18:19.997083 1558425 main.go:141] libmachine: (addons-301682) 
	I0630 14:18:20.002436 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:4a:da:84 in network default
	I0630 14:18:20.002966 1558425 main.go:141] libmachine: (addons-301682) starting domain...
	I0630 14:18:20.002981 1558425 main.go:141] libmachine: (addons-301682) ensuring networks are active...
	I0630 14:18:20.002988 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:20.003928 1558425 main.go:141] libmachine: (addons-301682) Ensuring network default is active
	I0630 14:18:20.004377 1558425 main.go:141] libmachine: (addons-301682) Ensuring network mk-addons-301682 is active
	I0630 14:18:20.004924 1558425 main.go:141] libmachine: (addons-301682) getting domain XML...
	I0630 14:18:20.006331 1558425 main.go:141] libmachine: (addons-301682) creating domain...
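Once the domain is defined, libvirt fills in generated MAC addresses (52:54:00:83:16:36 and 52:54:00:4a:da:84 in the lines above), which the driver reads back from the domain XML so it can match DHCP leases later. A hedged sketch of extracting those MACs with encoding/xml; the struct shapes are illustrative, not the driver's actual types.

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Minimal, illustrative structs covering only the fields we need
// from `virsh dumpxml <domain>` output.
type domain struct {
	Devices struct {
		Interfaces []struct {
			MAC struct {
				Address string `xml:"address,attr"`
			} `xml:"mac"`
			Source struct {
				Network string `xml:"network,attr"`
			} `xml:"source"`
		} `xml:"interface"`
	} `xml:"devices"`
}

// macsByNetwork maps each attached libvirt network to the MAC address
// libvirt generated for that interface.
func macsByNetwork(domXML []byte) (map[string]string, error) {
	var d domain
	if err := xml.Unmarshal(domXML, &d); err != nil {
		return nil, err
	}
	out := map[string]string{}
	for _, iface := range d.Devices.Interfaces {
		out[iface.Source.Network] = iface.MAC.Address
	}
	return out, nil
}

func main() {
	m, _ := macsByNetwork([]byte(`<domain><devices>
  <interface type='network'><mac address='52:54:00:83:16:36'/><source network='mk-addons-301682'/></interface>
  <interface type='network'><mac address='52:54:00:4a:da:84'/><source network='default'/></interface>
</devices></domain>`))
	fmt.Println(m)
}
```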
	I0630 14:18:21.490289 1558425 main.go:141] libmachine: (addons-301682) waiting for IP...
	I0630 14:18:21.491154 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.491628 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.491677 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.491627 1558447 retry.go:31] will retry after 227.981696ms: waiting for domain to come up
	I0630 14:18:21.721263 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.721780 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.721803 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.721737 1558447 retry.go:31] will retry after 379.046975ms: waiting for domain to come up
	I0630 14:18:22.102468 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.102921 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.102946 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.102870 1558447 retry.go:31] will retry after 342.349164ms: waiting for domain to come up
	I0630 14:18:22.446573 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.446984 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.447028 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.446972 1558447 retry.go:31] will retry after 471.24813ms: waiting for domain to come up
	I0630 14:18:22.920211 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.920789 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.920882 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.920792 1558447 retry.go:31] will retry after 708.674729ms: waiting for domain to come up
	I0630 14:18:23.631552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:23.632140 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:23.632158 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:23.632083 1558447 retry.go:31] will retry after 832.667186ms: waiting for domain to come up
	I0630 14:18:24.466597 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:24.467128 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:24.467188 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:24.467084 1558447 retry.go:31] will retry after 1.046318752s: waiting for domain to come up
	I0630 14:18:25.514952 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:25.515439 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:25.515467 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:25.515417 1558447 retry.go:31] will retry after 1.194063503s: waiting for domain to come up
	I0630 14:18:26.712109 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:26.712668 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:26.712736 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:26.712627 1558447 retry.go:31] will retry after 1.248422127s: waiting for domain to come up
	I0630 14:18:27.962423 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:27.962871 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:27.962904 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:27.962823 1558447 retry.go:31] will retry after 2.035519816s: waiting for domain to come up
	I0630 14:18:29.999626 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:30.000023 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:30.000122 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:30.000029 1558447 retry.go:31] will retry after 2.163487066s: waiting for domain to come up
	I0630 14:18:32.164834 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:32.165260 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:32.165289 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:32.165193 1558447 retry.go:31] will retry after 2.715279658s: waiting for domain to come up
	I0630 14:18:34.882095 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:34.882613 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:34.882651 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:34.882566 1558447 retry.go:31] will retry after 4.101409574s: waiting for domain to come up
	I0630 14:18:38.986670 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:38.987057 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:38.987115 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:38.987021 1558447 retry.go:31] will retry after 4.770477957s: waiting for domain to come up
	I0630 14:18:43.763775 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764289 1558425 main.go:141] libmachine: (addons-301682) found domain IP: 192.168.39.227
	I0630 14:18:43.764317 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has current primary IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764323 1558425 main.go:141] libmachine: (addons-301682) reserving static IP address...
	I0630 14:18:43.764708 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find host DHCP lease matching {name: "addons-301682", mac: "52:54:00:83:16:36", ip: "192.168.39.227"} in network mk-addons-301682
	I0630 14:18:43.852639 1558425 main.go:141] libmachine: (addons-301682) reserved static IP address 192.168.39.227 for domain addons-301682
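The "will retry after" lines above come from a poll loop with a growing, jittered backoff (roughly 228ms up to ~4.8s before the lease appears). A minimal sketch of that pattern follows; the initial interval, jitter, growth factor, and cap are assumptions, not minikube's exact retry.go parameters.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a growing, jittered backoff until it
// succeeds or the deadline passes, mirroring the "will retry after"
// lines in the log.
func waitFor(timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		if backoff *= 2; backoff > 5*time.Second {
			backoff = 5 * time.Second
		}
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	tries := 0
	err := waitFor(10*time.Second, func() (bool, error) {
		tries++
		return tries >= 3, nil // pretend the DHCP lease shows up on the third poll
	})
	fmt.Println(err)
}
```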
	I0630 14:18:43.852672 1558425 main.go:141] libmachine: (addons-301682) DBG | Getting to WaitForSSH function...
	I0630 14:18:43.852679 1558425 main.go:141] libmachine: (addons-301682) waiting for SSH...
	I0630 14:18:43.855466 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855863 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.855913 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855970 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH client type: external
	I0630 14:18:43.856034 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa (-rw-------)
	I0630 14:18:43.856089 1558425 main.go:141] libmachine: (addons-301682) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:18:43.856119 1558425 main.go:141] libmachine: (addons-301682) DBG | About to run SSH command:
	I0630 14:18:43.856137 1558425 main.go:141] libmachine: (addons-301682) DBG | exit 0
	I0630 14:18:43.981627 1558425 main.go:141] libmachine: (addons-301682) DBG | SSH cmd err, output: <nil>: 
	I0630 14:18:43.981928 1558425 main.go:141] libmachine: (addons-301682) KVM machine creation complete
	I0630 14:18:43.982338 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:43.982966 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983226 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983462 1558425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:18:43.983477 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:18:43.984862 1558425 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:18:43.984878 1558425 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:18:43.984885 1558425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:18:43.984892 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:43.987532 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.987932 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.987959 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.988068 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:43.988288 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988434 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988572 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:43.988711 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:43.988940 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:43.988950 1558425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:18:44.093060 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
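The SSH probe above simply runs `exit 0` as the docker user with host-key checking disabled. A sketch of the same liveness check using golang.org/x/crypto/ssh; the address and key path are illustrative.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// sshAlive dials the new VM and runs `exit 0`, the same liveness
// probe shown in the log.
func sshAlive(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // nil error means the VM is reachable
}

func main() {
	fmt.Println(sshAlive("192.168.39.227:22", "/path/to/id_rsa"))
}
```

A nil error here corresponds to the `SSH cmd err, output: <nil>` line above.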
	I0630 14:18:44.093094 1558425 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:18:44.093103 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.096339 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096697 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.096721 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096934 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.097182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097449 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097610 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.097843 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.098060 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.098080 1558425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:18:44.202824 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:18:44.202946 1558425 main.go:141] libmachine: found compatible host: buildroot
	I0630 14:18:44.202959 1558425 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:18:44.202967 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203257 1558425 buildroot.go:166] provisioning hostname "addons-301682"
	I0630 14:18:44.203283 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.206655 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.206965 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.206989 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.207261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.207476 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207654 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207765 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.207928 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.208172 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.208189 1558425 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-301682 && echo "addons-301682" | sudo tee /etc/hostname
	I0630 14:18:44.326076 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-301682
	
	I0630 14:18:44.326120 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.329781 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330236 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.330271 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330493 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.330780 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331000 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331147 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.331319 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.331561 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.331583 1558425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-301682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-301682/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-301682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:18:44.442815 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.442853 1558425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 14:18:44.442872 1558425 buildroot.go:174] setting up certificates
	I0630 14:18:44.442886 1558425 provision.go:84] configureAuth start
	I0630 14:18:44.442963 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.443427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:44.446591 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447120 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.447146 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447411 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.449967 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450292 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.450314 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450474 1558425 provision.go:143] copyHostCerts
	I0630 14:18:44.450577 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 14:18:44.450730 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 14:18:44.450832 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 14:18:44.450922 1558425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.addons-301682 san=[127.0.0.1 192.168.39.227 addons-301682 localhost minikube]
	I0630 14:18:44.669777 1558425 provision.go:177] copyRemoteCerts
	I0630 14:18:44.669866 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:18:44.669906 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.673124 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673495 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.673530 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673760 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.674080 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.674291 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.674517 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:44.758379 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:18:44.788885 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:18:44.817666 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:18:44.847039 1558425 provision.go:87] duration metric: took 404.122435ms to configureAuth
	I0630 14:18:44.847076 1558425 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:18:44.847582 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:18:44.847720 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.850359 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.850971 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.850998 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.851240 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.851500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851706 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851871 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.852084 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.852306 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.852322 1558425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 14:18:45.094141 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 14:18:45.094172 1558425 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:18:45.094182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetURL
	I0630 14:18:45.095525 1558425 main.go:141] libmachine: (addons-301682) DBG | using libvirt version 6000000
	I0630 14:18:45.097995 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098457 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.098484 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098973 1558425 main.go:141] libmachine: Docker is up and running!
	I0630 14:18:45.098988 1558425 main.go:141] libmachine: Reticulating splines...
	I0630 14:18:45.098996 1558425 client.go:171] duration metric: took 26.087298039s to LocalClient.Create
	I0630 14:18:45.099029 1558425 start.go:167] duration metric: took 26.087375233s to libmachine.API.Create "addons-301682"
	I0630 14:18:45.099043 1558425 start.go:293] postStartSetup for "addons-301682" (driver="kvm2")
	I0630 14:18:45.099058 1558425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:18:45.099080 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.099385 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:18:45.099417 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.103070 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103476 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.103519 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.103974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.104154 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.104348 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.190062 1558425 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:18:45.194479 1558425 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:18:45.194513 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 14:18:45.194584 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 14:18:45.194617 1558425 start.go:296] duration metric: took 95.564885ms for postStartSetup
	I0630 14:18:45.194655 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:45.195269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.198414 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.198916 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.198937 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.199225 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:45.199414 1558425 start.go:128] duration metric: took 26.209245344s to createHost
	I0630 14:18:45.199439 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.202677 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203657 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.203683 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203917 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.204167 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204389 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204594 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.204750 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:45.204952 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:45.204962 1558425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:18:45.310482 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751293125.283428942
	
	I0630 14:18:45.310513 1558425 fix.go:216] guest clock: 1751293125.283428942
	I0630 14:18:45.310540 1558425 fix.go:229] Guest: 2025-06-30 14:18:45.283428942 +0000 UTC Remote: 2025-06-30 14:18:45.199427216 +0000 UTC m=+26.326566099 (delta=84.001726ms)
	I0630 14:18:45.310570 1558425 fix.go:200] guest clock delta is within tolerance: 84.001726ms
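The clock-skew check parses `date +%s.%N` from the guest and compares it against the host clock. A small worked sketch reproducing the 84.001726ms delta from the log; the one-second tolerance is an assumption, since the log only reports "within tolerance".

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
// It assumes the fractional field carries the full nine digits of
// nanoseconds, as it does in the log above.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1751293125.283428942")
	host := time.Date(2025, 6, 30, 14, 18, 45, 199427216, time.UTC)
	delta := guest.Sub(host)
	const tolerance = time.Second // assumed; the log only says "within tolerance"
	fmt.Printf("delta=%v ok=%v\n", delta, delta.Abs() < tolerance)
}
```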
	I0630 14:18:45.310578 1558425 start.go:83] releasing machines lock for "addons-301682", held for 26.320495243s
	I0630 14:18:45.310656 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.310928 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.313785 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314207 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.314241 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314506 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315123 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315340 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315461 1558425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:18:45.315505 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.315646 1558425 ssh_runner.go:195] Run: cat /version.json
	I0630 14:18:45.315683 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.318925 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319155 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319563 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319594 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319617 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319643 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319788 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.319877 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.320031 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320110 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320304 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320317 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320442 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.320501 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.399981 1558425 ssh_runner.go:195] Run: systemctl --version
	I0630 14:18:45.435607 1558425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 14:18:45.595593 1558425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:18:45.602291 1558425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:18:45.602374 1558425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:18:45.622229 1558425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:18:45.622263 1558425 start.go:495] detecting cgroup driver to use...
	I0630 14:18:45.622333 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 14:18:45.641226 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 14:18:45.658995 1558425 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:18:45.659074 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:18:45.675308 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:18:45.691780 1558425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:18:45.844773 1558425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:18:46.002067 1558425 docker.go:246] disabling docker service ...
	I0630 14:18:46.002163 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:18:46.018486 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:18:46.032711 1558425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:18:46.215507 1558425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:18:46.345437 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:18:46.361241 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:18:46.382182 1558425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 14:18:46.382265 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.393781 1558425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 14:18:46.393858 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.404879 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.415753 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.427101 1558425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:18:46.439585 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.450640 1558425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.469657 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
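The sed commands above patch the CRI-O drop-in config in place over SSH. For illustration, here is the pause_image rewrite expressed as an in-process Go transformation using the same `^.*pause_image = .*$` pattern; the function name is hypothetical.

```go
package main

import (
	"fmt"
	"regexp"
)

// setPauseImage rewrites the pause_image line of a CRI-O drop-in,
// matching the same regular expression the sed invocation uses.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10"))
}
```

The cgroup_manager and conmon_cgroup edits in the log follow the same replace-by-regex shape.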
	I0630 14:18:46.480995 1558425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:18:46.490960 1558425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:18:46.491038 1558425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:18:46.506162 1558425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 14:18:46.516885 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:46.649290 1558425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 14:18:46.754804 1558425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 14:18:46.754924 1558425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 14:18:46.760277 1558425 start.go:563] Will wait 60s for crictl version
	I0630 14:18:46.760374 1558425 ssh_runner.go:195] Run: which crictl
	I0630 14:18:46.764622 1558425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:18:46.806540 1558425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
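After restarting crio, the log waits up to 60s for the CRI socket to appear before probing crictl. A sketch of that stat-poll with a deadline; the 500ms poll interval is an assumption.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket path with a hard deadline,
// the same shape as minikube's "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
```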
	I0630 14:18:46.806668 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.835571 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.870294 1558425 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 14:18:46.871793 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:46.874897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875281 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:46.875316 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875568 1558425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:18:46.880040 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:46.893844 1558425 kubeadm.go:875] updating cluster {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:18:46.894040 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:46.894098 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:46.928051 1558425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:18:46.928142 1558425 ssh_runner.go:195] Run: which lz4
	I0630 14:18:46.932106 1558425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:18:46.936459 1558425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:18:46.936498 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 14:18:48.250677 1558425 crio.go:462] duration metric: took 1.318609473s to copy over tarball
	I0630 14:18:48.250794 1558425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:18:50.229636 1558425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978807649s)
	I0630 14:18:50.229688 1558425 crio.go:469] duration metric: took 1.978978941s to extract the tarball
	I0630 14:18:50.229696 1558425 ssh_runner.go:146] rm: /preloaded.tar.lz4
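
This is minikube's image preload path: crictl reported the expected kube images missing, so the cached lz4 tarball was copied to the guest and unpacked over /var with security xattrs preserved, giving CRI-O a warm image store without pulling from a registry. The unpack and verification steps, as run above:

    # Extract the preloaded image tarball over the container storage root,
    # keeping file capabilities (security.capability xattrs) intact
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # Confirm the runtime now sees the preloaded images
    sudo crictl images --output json
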
	I0630 14:18:50.268804 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:50.313787 1558425 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 14:18:50.313824 1558425 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:18:50.313836 1558425 kubeadm.go:926] updating node { 192.168.39.227 8443 v1.33.2 crio true true} ...
	I0630 14:18:50.313984 1558425 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-301682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
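
The rendered unit above is installed as a systemd drop-in (10-kubeadm.conf), with the empty ExecStart= line clearing the base unit's command before the minikube-specific one is set. To inspect what systemd will actually execute after the daemon-reload, something like:

    # Show the base kubelet unit plus all drop-ins, merged the way systemd sees them
    systemctl cat kubelet
    # Print the effective ExecStart after the override
    systemctl show kubelet -p ExecStart
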
	I0630 14:18:50.314108 1558425 ssh_runner.go:195] Run: crio config
	I0630 14:18:50.358762 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:50.358788 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:50.358799 1558425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:18:50.358821 1558425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-301682 NodeName:addons-301682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:18:50.358985 1558425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-301682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
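
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what kubeadm consumes from a single file. Before committing to an init, the rendered file can be sanity-checked without mutating the host; a dry-run exercises the same parsing and validation path (file path as staged by minikube below):

    # Parse and validate the rendered config without touching the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
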
	
	I0630 14:18:50.359075 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:18:50.370269 1558425 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:18:50.370359 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:18:50.381422 1558425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0630 14:18:50.402864 1558425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:18:50.423535 1558425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0630 14:18:50.443802 1558425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0630 14:18:50.448073 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:50.462771 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:50.610565 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:18:50.641674 1558425 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682 for IP: 192.168.39.227
	I0630 14:18:50.641703 1558425 certs.go:194] generating shared ca certs ...
	I0630 14:18:50.641726 1558425 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.641917 1558425 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 14:18:50.775973 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt ...
	I0630 14:18:50.776127 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt: {Name:mk4a7e2f23df1877aa667a5fe9d149d87fa65b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776340 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key ...
	I0630 14:18:50.776353 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key: {Name:mkfe815a12ae8eded146419f42722ed747bb8cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776428 1558425 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 14:18:51.239699 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt ...
	I0630 14:18:51.239736 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt: {Name:mk010f91985630538e2436d654ff5b4cc759ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.239913 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key ...
	I0630 14:18:51.239969 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key: {Name:mk7a36f8a28748533897dd07634d8a5fe44a63a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.240059 1558425 certs.go:256] generating profile certs ...
	I0630 14:18:51.240131 1558425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key
	I0630 14:18:51.240150 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt with IP's: []
	I0630 14:18:51.635887 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt ...
	I0630 14:18:51.635927 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: {Name:mk22a67b2c0e90bc5dc67c34e330ee73fa799ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636119 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key ...
	I0630 14:18:51.636131 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key: {Name:mkbf3398b6d7cd5371d9a47d76e04eca4caef4d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636203 1558425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213
	I0630 14:18:51.636222 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0630 14:18:52.292769 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 ...
	I0630 14:18:52.292809 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213: {Name:mk1402d3ac26fc5001a4011347c3552a378bda20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.292987 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 ...
	I0630 14:18:52.293001 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213: {Name:mkeaa6e21db5ae6cfb6b65c2ca90535340da5144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.293104 1558425 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt
	I0630 14:18:52.293196 1558425 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key
	I0630 14:18:52.293250 1558425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key
	I0630 14:18:52.293270 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt with IP's: []
	I0630 14:18:52.419123 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt ...
	I0630 14:18:52.419160 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt: {Name:mk3dd33047a5c3911a43a99bfac807aefa8e06f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419432 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key ...
	I0630 14:18:52.419460 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key: {Name:mk0d0b95d0dc825fc1e604461553530ed22a222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419680 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:18:52.419719 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:18:52.419744 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:18:52.419768 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 14:18:52.420585 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:18:52.463313 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:18:52.499004 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:18:52.526030 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 14:18:52.553220 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:18:52.581783 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:18:52.609656 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:18:52.639333 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 14:18:52.668789 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:18:52.696673 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:18:52.718151 1558425 ssh_runner.go:195] Run: openssl version
	I0630 14:18:52.724602 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:18:52.737181 1558425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742169 1558425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742231 1558425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.749342 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
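
The b5213941.0 symlink name is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is how TLS libraries look up trust anchors in /etc/ssl/certs. The two log commands amount to:

    # Compute the subject hash and link the CA under "<hash>.0" so OpenSSL finds it
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
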
	I0630 14:18:52.762744 1558425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:18:52.768406 1558425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:18:52.768474 1558425 kubeadm.go:392] StartCluster: {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:52.768572 1558425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 14:18:52.768641 1558425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:18:52.812315 1558425 cri.go:89] found id: ""
	I0630 14:18:52.812437 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:18:52.824357 1558425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:18:52.837485 1558425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:18:52.850688 1558425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:18:52.850718 1558425 kubeadm.go:157] found existing configuration files:
	
	I0630 14:18:52.850770 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:18:52.862272 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:18:52.862353 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:18:52.874603 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:18:52.885384 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:18:52.885470 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:18:52.897341 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.908726 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:18:52.908791 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.920093 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:18:52.930423 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:18:52.930535 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:18:52.943582 1558425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:18:53.101493 1558425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:19:04.329808 1558425 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:19:04.329898 1558425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:19:04.330028 1558425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:19:04.330246 1558425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:19:04.330383 1558425 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:19:04.330478 1558425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:19:04.332630 1558425 out.go:235]   - Generating certificates and keys ...
	I0630 14:19:04.332731 1558425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:19:04.332810 1558425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:19:04.332905 1558425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:19:04.332972 1558425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:19:04.333024 1558425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:19:04.333069 1558425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:19:04.333119 1558425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:19:04.333250 1558425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333332 1558425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:19:04.333509 1558425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333623 1558425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:19:04.333739 1558425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:19:04.333816 1558425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:19:04.333868 1558425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:19:04.333909 1558425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:19:04.333955 1558425 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:19:04.334001 1558425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:19:04.334088 1558425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:19:04.334155 1558425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:19:04.334337 1558425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:19:04.334433 1558425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:19:04.336040 1558425 out.go:235]   - Booting up control plane ...
	I0630 14:19:04.336158 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:19:04.336225 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:19:04.336291 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:19:04.336387 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:19:04.336461 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:19:04.336498 1558425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:19:04.336705 1558425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:19:04.336826 1558425 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:19:04.336898 1558425 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501258s
	I0630 14:19:04.336999 1558425 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:19:04.337079 1558425 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.227:8443/livez
	I0630 14:19:04.337160 1558425 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:19:04.337266 1558425 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:19:04.337343 1558425 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.200262885s
	I0630 14:19:04.337437 1558425 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.075387862s
	I0630 14:19:04.337541 1558425 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001441935s
	I0630 14:19:04.337665 1558425 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:19:04.337791 1558425 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:19:04.337843 1558425 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:19:04.338003 1558425 kubeadm.go:310] [mark-control-plane] Marking the node addons-301682 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:19:04.338066 1558425 kubeadm.go:310] [bootstrap-token] Using token: anrlv2.kitz2ouxhot5qn5d
	I0630 14:19:04.339966 1558425 out.go:235]   - Configuring RBAC rules ...
	I0630 14:19:04.340101 1558425 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:19:04.340226 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:19:04.340408 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:19:04.340552 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:19:04.340686 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:19:04.340806 1558425 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:19:04.340905 1558425 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:19:04.340944 1558425 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:19:04.340984 1558425 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:19:04.340990 1558425 kubeadm.go:310] 
	I0630 14:19:04.341040 1558425 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:19:04.341045 1558425 kubeadm.go:310] 
	I0630 14:19:04.341135 1558425 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:19:04.341142 1558425 kubeadm.go:310] 
	I0630 14:19:04.341172 1558425 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:19:04.341223 1558425 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:19:04.341270 1558425 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:19:04.341276 1558425 kubeadm.go:310] 
	I0630 14:19:04.341322 1558425 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:19:04.341328 1558425 kubeadm.go:310] 
	I0630 14:19:04.341449 1558425 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:19:04.341467 1558425 kubeadm.go:310] 
	I0630 14:19:04.341541 1558425 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:19:04.341643 1558425 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:19:04.341707 1558425 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:19:04.341712 1558425 kubeadm.go:310] 
	I0630 14:19:04.341781 1558425 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:19:04.341846 1558425 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:19:04.341851 1558425 kubeadm.go:310] 
	I0630 14:19:04.341924 1558425 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342019 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 14:19:04.342038 1558425 kubeadm.go:310] 	--control-plane 
	I0630 14:19:04.342043 1558425 kubeadm.go:310] 
	I0630 14:19:04.342140 1558425 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:19:04.342157 1558425 kubeadm.go:310] 
	I0630 14:19:04.342225 1558425 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342331 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
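
The --discovery-token-ca-cert-hash printed by kubeadm is the SHA-256 of the cluster CA's DER-encoded public key, which lets joining nodes pin the CA without pre-sharing the certificate. It can be recomputed on the control plane (CA path per the certificatesDir configured above):

    # Recompute the hash kubeadm prints as sha256:<hex>
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | awk '{print $NF}'
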
	I0630 14:19:04.342344 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:19:04.342353 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:19:04.344305 1558425 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:19:04.345962 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:19:04.358944 1558425 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
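
The 496-byte conflist written here is minikube's bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. A minimal file of the same shape, shown for illustration only (field values are typical bridge-plugin settings, not the byte-for-byte content minikube renders):

    # Illustrative bridge CNI config of the kind placed in /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
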
	I0630 14:19:04.382550 1558425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:19:04.382682 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:04.382684 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-301682 minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-301682 minikube.k8s.io/primary=true
	I0630 14:19:04.443025 1558425 ops.go:34] apiserver oom_adj: -16
	I0630 14:19:04.557859 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.058710 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.558655 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.058095 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.558920 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.058903 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.558782 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.058045 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.134095 1558425 kubeadm.go:1105] duration metric: took 3.751500145s to wait for elevateKubeSystemPrivileges
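
The back-to-back "kubectl get sa default" runs above are a readiness poll: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has processed the namespace, which is what the elevateKubeSystemPrivileges step waits on. The loop, reduced to shell:

    # Poll until the "default" ServiceAccount appears (controller-manager is live)
    until sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
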
	I0630 14:19:08.134146 1558425 kubeadm.go:394] duration metric: took 15.365674649s to StartCluster
	I0630 14:19:08.134169 1558425 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.134310 1558425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:19:08.134819 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.135078 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:19:08.135086 1558425 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:19:08.135172 1558425 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
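
The toEnable map is the profile's addon switchboard; every key in it can also be flipped from the minikube CLI against the same profile, e.g.:

    # Inspect and toggle addons on the test profile
    minikube -p addons-301682 addons list
    minikube -p addons-301682 addons enable registry
    minikube -p addons-301682 addons disable volcano
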
	I0630 14:19:08.135355 1558425 addons.go:69] Setting yakd=true in profile "addons-301682"
	I0630 14:19:08.135370 1558425 addons.go:69] Setting default-storageclass=true in profile "addons-301682"
	I0630 14:19:08.135401 1558425 addons.go:69] Setting ingress=true in profile "addons-301682"
	I0630 14:19:08.135408 1558425 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-301682"
	I0630 14:19:08.135419 1558425 addons.go:69] Setting ingress-dns=true in profile "addons-301682"
	I0630 14:19:08.135425 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-301682"
	I0630 14:19:08.135433 1558425 addons.go:238] Setting addon ingress-dns=true in "addons-301682"
	I0630 14:19:08.135450 1558425 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135439 1558425 addons.go:69] Setting cloud-spanner=true in profile "addons-301682"
	I0630 14:19:08.135466 1558425 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-301682"
	I0630 14:19:08.135453 1558425 addons.go:69] Setting registry-creds=true in profile "addons-301682"
	I0630 14:19:08.135470 1558425 addons.go:238] Setting addon cloud-spanner=true in "addons-301682"
	I0630 14:19:08.135482 1558425 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-301682"
	I0630 14:19:08.135488 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135499 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-301682"
	I0630 14:19:08.135507 1558425 addons.go:238] Setting addon registry-creds=true in "addons-301682"
	I0630 14:19:08.135508 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135522 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135532 1558425 addons.go:69] Setting volcano=true in profile "addons-301682"
	I0630 14:19:08.135553 1558425 addons.go:238] Setting addon volcano=true in "addons-301682"
	I0630 14:19:08.135560 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135601 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135968 1558425 addons.go:69] Setting storage-provisioner=true in profile "addons-301682"
	I0630 14:19:08.135968 1558425 addons.go:69] Setting volumesnapshots=true in profile "addons-301682"
	I0630 14:19:08.135383 1558425 addons.go:238] Setting addon yakd=true in "addons-301682"
	I0630 14:19:08.135985 1558425 addons.go:238] Setting addon storage-provisioner=true in "addons-301682"
	I0630 14:19:08.135986 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135992 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135999 1558425 addons.go:69] Setting metrics-server=true in profile "addons-301682"
	I0630 14:19:08.136001 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135468 1558425 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:08.136013 1558425 addons.go:238] Setting addon metrics-server=true in "addons-301682"
	I0630 14:19:08.136018 1558425 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136026 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136033 1558425 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-301682"
	I0630 14:19:08.136033 1558425 addons.go:69] Setting registry=true in profile "addons-301682"
	I0630 14:19:08.136037 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136042 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136046 1558425 addons.go:238] Setting addon registry=true in "addons-301682"
	I0630 14:19:08.136053 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136053 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136063 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136333 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136344 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135988 1558425 addons.go:238] Setting addon volumesnapshots=true in "addons-301682"
	I0630 14:19:08.136373 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136380 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135392 1558425 addons.go:69] Setting gcp-auth=true in profile "addons-301682"
	I0630 14:19:08.136406 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135413 1558425 addons.go:238] Setting addon ingress=true in "addons-301682"
	I0630 14:19:08.136410 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136430 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136437 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136439 1558425 mustload.go:65] Loading cluster: addons-301682
	I0630 14:19:08.135985 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136376 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136021 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136019 1558425 addons.go:69] Setting inspektor-gadget=true in profile "addons-301682"
	I0630 14:19:08.136533 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136408 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136571 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136399 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136594 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136043 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136654 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136035 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135386 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136802 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136830 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136538 1558425 addons.go:238] Setting addon inspektor-gadget=true in "addons-301682"
	I0630 14:19:08.136860 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136968 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.137006 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.141678 1558425 out.go:177] * Verifying Kubernetes components...
	I0630 14:19:08.143558 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:19:08.149915 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.149982 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.150069 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.150111 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.153357 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.153432 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.165614 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0630 14:19:08.165858 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0630 14:19:08.166745 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.166906 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.167573 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167595 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.167730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167744 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.168231 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168297 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.168851 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.168901 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.173235 1558425 addons.go:238] Setting addon default-storageclass=true in "addons-301682"
	I0630 14:19:08.173294 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.173724 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.173785 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.184456 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0630 14:19:08.185663 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.186359 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.186383 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.186868 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.187481 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.187524 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.198676 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0630 14:19:08.199720 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0630 14:19:08.200624 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.201056 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0630 14:19:08.201384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.201425 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.201824 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.202320 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.202341 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.202767 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.203373 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.203425 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.203875 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.204017 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.204559 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.204608 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.204944 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.204958 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.205500 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.206106 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.206167 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.212484 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0630 14:19:08.213076 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.213762 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.213782 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.214717 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0630 14:19:08.214882 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0630 14:19:08.215450 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.215549 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.216208 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216234 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216395 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216419 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216498 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.216551 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0630 14:19:08.217141 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.217198 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.218026 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218644 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218679 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:19:08.218965 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219098 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0630 14:19:08.219374 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.219416 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.219490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.219517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.219600 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219645 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.220038 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220058 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.220197 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220208 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.222722 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0630 14:19:08.222897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0630 14:19:08.223028 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.223845 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.223892 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.224072 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0630 14:19:08.224388 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0630 14:19:08.224623 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.225142 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.225164 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.225248 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0630 14:19:08.225593 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226043 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226641 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.226692 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.227826 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.228314 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.228351 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.228730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.228753 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.228834 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.228874 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0630 14:19:08.229220 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.229470 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.229681 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.229725 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.230097 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.230128 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.240167 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.240974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.241058 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0630 14:19:08.243477 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.243596 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0630 14:19:08.261647 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0630 14:19:08.261668 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0630 14:19:08.261862 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0630 14:19:08.262201 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0630 14:19:08.261652 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0630 14:19:08.261852 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0630 14:19:08.262971 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.263041 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263580 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263640 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263642 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263689 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263697 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263766 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263767 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264204 1558425 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:19:08.264700 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264710 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264910 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.264924 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265056 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265067 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265244 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265261 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265313 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265330 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265397 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265504 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265522 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265580 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265661 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265661 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:08.265674 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265689 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:19:08.265696 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265706 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265712 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
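
The "scp memory --> /etc/kubernetes/addons/..." entries record ssh_runner pushing an in-memory manifest straight onto the guest rather than copying a local file. One plausible way to stream bytes to a root-owned remote path over an established *ssh.Client is the hedged sketch below, using golang.org/x/crypto/ssh and sudo tee; this is not necessarily ssh_runner's exact mechanism, and the dial sketch further down shows where such a client would come from.

    package sshsketch

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // pushBytes streams an in-memory manifest to a root-owned path on the
    // guest by piping it through "sudo tee". Sketch only: a real
    // implementation would also create parent directories and set modes.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
    }
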
	I0630 14:19:08.265940 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265988 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266721 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266732 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266787 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266802 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266873 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266885 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266892 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266920 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266927 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266935 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266948 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266963 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267095 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267169 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267219 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267412 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267464 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267868 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.267912 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.268375 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.268443 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.268484 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.269549 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.269597 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.270926 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.272833 1558425 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:19:08.274128 1558425 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.274146 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:19:08.274171 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.274859 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275064 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275721 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.276192 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275698 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.277235 1558425 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:19:08.277261 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:19:08.277735 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.277888 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:19:08.277911 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:19:08.278583 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.278754 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.278813 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.278881 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:19:08.278897 1558425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:19:08.278922 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279033 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.279041 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:19:08.279054 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279564 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.279577 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:19:08.279593 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279642 1558425 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:19:08.281429 1558425 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:08.281448 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:19:08.281468 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.281533 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.282713 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.283764 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284087 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284228 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:08.284248 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:19:08.284269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.284461 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284503 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284726 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.284883 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.284950 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284965 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285137 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.285324 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
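
The sshutil lines log exactly the connection parameters needed to reach the guest: IP 192.168.39.227, port 22, the per-machine id_rsa key, and user "docker". A minimal dial with golang.org/x/crypto/ssh using those parameters looks like the sketch below; the real sshutil helper does more than this.

    package sshsketch

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // dial connects with the parameters the sshutil lines log: guest IP,
    // port 22, the per-machine private key, and user "docker".
    func dial(ip, keyPath, user string) (*ssh.Client, error) {
    	pem, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(pem)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User: user,
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Acceptable for a throwaway test VM; never for real hosts.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	return ssh.Dial("tcp", ip+":22", cfg)
    }
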
	I0630 14:19:08.285515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.285599 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285736 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286034 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.286041 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286069 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286207 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.286615 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.286628 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286660 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286673 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.287215 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287232 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.287998 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287988 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288619 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288647 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.288829 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.288982 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289082 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.289115 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289387 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289495 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289954 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.289983 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.290152 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290230 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290347 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290431 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.291154 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.292418 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.292454 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.292433 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.292721 1558425 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-301682"
	I0630 14:19:08.292738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.292763 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.292887 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.293016 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.293150 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.293200 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.294549 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:19:08.296018 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:19:08.297203 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:19:08.298509 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:19:08.299741 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:19:08.301072 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:19:08.302287 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:19:08.303246 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0630 14:19:08.303926 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.304284 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:19:08.304575 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.304600 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.305069 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.305303 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.305513 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:19:08.305597 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:19:08.305646 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0630 14:19:08.308495 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0630 14:19:08.309009 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309265 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309301 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309500 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.309544 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309729 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.309915 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.310105 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.310445 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.310557 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.310962 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.310986 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312430 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.312542 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0630 14:19:08.312690 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.312715 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0630 14:19:08.312896 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.312908 1558425 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:08.312914 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312922 1558425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:19:08.312899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.312950 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.312967 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0630 14:19:08.313116 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.313130 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.313608 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.313798 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.314003 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314075 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.314701 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314761 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.314826 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.315163 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315447 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.315638 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315743 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.315801 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.316217 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.316239 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.316441 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.317458 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.317755 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.318404 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.318763 1558425 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:19:08.319446 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.319608 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.319686 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.319964 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.319978 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320265 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.320279 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:08.320350 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.320357 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320810 1558425 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:19:08.320976 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:19:08.321001 1558425 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:19:08.321024 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.321215 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:19:08.322277 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:19:08.322294 1558425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:19:08.322314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323097 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323112 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.323135 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:08.323167 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.323175 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:08.323273 1558425 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0630 14:19:08.323158 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.323505 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323867 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:19:08.323883 1558425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:19:08.323899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323920 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.323964 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0630 14:19:08.324118 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.324491 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.324603 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:19:08.324644 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.324757 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.325272 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.325293 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.327148 1558425 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:19:08.328448 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328463 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.328471 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0630 14:19:08.328485 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.328486 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:19:08.328506 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:19:08.328469 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.328555 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.329271 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329296 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329298 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329306 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.329427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329488 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329522 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329831 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329844 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329873 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329893 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.329908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329932 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.329965 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.330048 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330100 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330127 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.330233 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330571 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.330635 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330797 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.331366 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.331539 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.333151 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.333196 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.333924 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.333946 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.334093 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.334267 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.334413 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.334534 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.335093 1558425 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:19:08.336351 1558425 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:08.336368 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:19:08.336384 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.339580 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340100 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.340140 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.340523 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.340672 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.340813 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.350360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0630 14:19:08.350984 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.351790 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.351819 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.352186 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.352420 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.354260 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.356054 1558425 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:19:08.357435 1558425 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:19:08.358781 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:08.358803 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:19:08.358828 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.362552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.362966 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.362990 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.363100 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.363314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.363506 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.363630 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.439689 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
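
The bash pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts{} block that resolves host.minikube.internal to the libvirt gateway (192.168.39.1) ahead of the forward plugin, adds a log directive ahead of errors, and replaces the ConfigMap. The same edit expressed in Go, as an illustrative sketch of what the two sed expressions do to the Corefile:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord reproduces the two sed insertions from the logged
    // pipeline: a hosts{} block before the forward plugin, and "log"
    // before "errors".
    func injectHostRecord(corefile, hostIP string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
    			out.WriteString(hosts) // sed: /forward . \/etc\/resolv.conf.*/i hosts {...}
    		}
    		if trimmed == "errors" {
    			out.WriteString("        log\n") // sed: /^        errors *$/i log
    		}
    		out.WriteString(line + "\n")
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
    	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }

Applied to the default Corefile this produces the hosts block that lets pods resolve host.minikube.internal, which is the record the "host record injected into CoreDNS's ConfigMap" line below confirms.
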
	I0630 14:19:08.476644 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:19:08.843915 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.877498 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.886078 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:19:08.886117 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:19:08.911521 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.934599 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:09.020016 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:09.040482 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:19:09.040511 1558425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:19:09.043569 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:09.148704 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:09.202814 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:19:09.202869 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:19:09.278194 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:19:09.278231 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:19:09.295189 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:09.295224 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:19:09.299217 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:19:09.299263 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:19:09.332360 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:19:09.332403 1558425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:19:09.352402 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:19:09.352438 1558425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:19:09.405398 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:09.451227 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:09.755506 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:19:09.755546 1558425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:19:09.891227 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:19:09.891271 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:19:09.920129 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:09.920177 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:19:09.934092 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:19:09.934135 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:19:09.987104 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:09.987162 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:19:10.065936 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:10.412611 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:19:10.412651 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:19:10.472848 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:10.472884 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:19:10.534908 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:10.637801 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:19:10.637839 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:19:10.658361 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:10.787257 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:19:10.787289 1558425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:19:10.989751 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:11.047653 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:19:11.047693 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:19:11.196682 1558425 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.196715 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:19:11.291758 1558425 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.852019855s)
	I0630 14:19:11.291806 1558425 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:19:11.291816 1558425 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.815128335s)
	I0630 14:19:11.292560 1558425 node_ready.go:35] waiting up to 6m0s for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314454 1558425 node_ready.go:49] node "addons-301682" is "Ready"
	I0630 14:19:11.314498 1558425 node_ready.go:38] duration metric: took 21.89293ms for node "addons-301682" to be "Ready" ...
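
node_ready reports the node "Ready" almost immediately here because the control plane is already up; the check itself is just a read of the Node's Ready condition. A hedged client-go sketch of such a check is below; it assumes a reachable kubeconfig path, whereas the test goes through its own wrappers rather than raw client-go.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-301682", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// A node is "Ready" when its NodeReady condition is True.
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Printf("node %q Ready=%s\n", node.Name, c.Status)
    		}
    	}
    }
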
	I0630 14:19:11.314515 1558425 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:19:11.314579 1558425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
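
The apiserver wait is a plain process poll: keep running the logged pgrep until it matches. A local sketch of that loop follows; the real code drives the same command through its SSH runner rather than executing it on the host.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process matching
    // the minikube binary path appears, or the deadline passes.
    func waitForAPIServer(deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		// -x with -f matches the whole command line; -n picks the newest.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", deadline)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
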
	I0630 14:19:11.614705 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:19:11.614735 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:19:11.736486 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:19:11.736514 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:19:11.778191 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.869515 1558425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-301682" context rescaled to 1 replicas
	I0630 14:19:12.215816 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:19:12.215858 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:19:12.875440 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:19:12.875469 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:19:13.113763 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:19:13.113791 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:19:13.233897 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.233936 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:19:13.547481 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
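
Addon manifests are applied with the cluster's own kubectl binary against /var/lib/minikube/kubeconfig, and related files are batched into a single apply, as in the csi-hostpath invocation above. Reproduced from Go, the equivalent invocation looks like the sketch below (paths exactly as logged; the runner executes it over SSH rather than locally).

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// sudo accepts VAR=value assignments before the command, which is
    	// how the logged invocations pass KUBECONFIG through to kubectl.
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.33.2/kubectl", "apply",
    	}
    	for _, f := range []string{
    		"/etc/kubernetes/addons/rbac-external-attacher.yaml",
    		"/etc/kubernetes/addons/csi-hostpath-attacher.yaml",
    		// ...remaining csi-hostpath manifests as listed in the log...
    	} {
    		args = append(args, "-f", f)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
    	}
    }
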
	I0630 14:19:13.908710 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.064741353s)
	I0630 14:19:13.908777 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.031226379s)
	I0630 14:19:13.908828 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908848 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908846 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.997298204s)
	I0630 14:19:13.908863 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908877 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908789 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908930 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908964 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.974334377s)
	I0630 14:19:13.908996 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909007 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909009 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.888949022s)
	I0630 14:19:13.909048 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909061 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909699 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.909716 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.909725 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909733 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910126 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910140 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910150 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910156 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910411 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910438 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910445 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910452 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910457 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910696 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910727 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910744 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910751 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910757 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.911970 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912059 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912080 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912106 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.912127 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.912244 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912321 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912376 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912399 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912409 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912423 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912436 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912476 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912487 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913952 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:15.489658 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:19:15.489718 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:15.493165 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493587 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:15.493623 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493976 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:15.494223 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:15.494515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:15.494707 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
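sshutil.go:53 builds an SSH client from the machine's key pair so the later scp and Run calls can reach the guest. An illustrative equivalent using golang.org/x/crypto/ssh (the IP, port, key path and username come from the log line above; the InsecureIgnoreHostKey callback is an assumption that suits a throwaway test VM, not a general recommendation):

package sketch

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials the guest with public-key auth, mirroring the
// sshutil.go:53 "new ssh client" log line above.
func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: throwaway test VM
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}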
	I0630 14:19:15.765543 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:19:15.978232 1558425 addons.go:238] Setting addon gcp-auth=true in "addons-301682"
	I0630 14:19:15.978326 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:15.978844 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:15.978897 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:15.997982 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0630 14:19:15.998461 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:15.999138 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:15.999166 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:15.999618 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.000381 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:16.000428 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:16.018425 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0630 14:19:16.018996 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:16.019552 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:16.019578 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:16.020118 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.020378 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:16.022570 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:16.022848 1558425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:19:16.022880 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:16.026200 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027053 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:16.027107 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027360 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:16.027605 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:16.027797 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:16.027986 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:16.771513 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.727888765s)
	I0630 14:19:16.771570 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.622822849s)
	I0630 14:19:16.771591 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771607 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771630 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771647 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771647 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.36619116s)
	I0630 14:19:16.771673 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771688 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771767 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.320503654s)
	I0630 14:19:16.771831 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771842 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.705862816s)
	I0630 14:19:16.771865 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771873 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771904 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.236967233s)
	I0630 14:19:16.771940 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771966 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771989 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.113597897s)
	I0630 14:19:16.772016 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772026 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772112 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.782331879s)
	I0630 14:19:16.772132 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772140 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772199 1558425 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.457605469s)
	I0630 14:19:16.772216 1558425 api_server.go:72] duration metric: took 8.637102064s to wait for apiserver process to appear ...
	I0630 14:19:16.772223 1558425 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:19:16.772245 1558425 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0630 14:19:16.771847 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772472 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772489 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772500 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772508 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772567 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772660 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772670 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772678 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772685 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772744 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772768 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772774 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772782 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772789 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773055 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773073 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773096 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773119 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773125 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773131 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773137 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773371 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773380 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773388 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773398 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773540 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773583 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773592 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773602 1558425 addons.go:479] Verifying addon registry=true in "addons-301682"
	I0630 14:19:16.773651 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773661 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773668 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773675 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773927 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773965 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774128 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774333 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774357 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774383 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774389 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774656 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774694 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774695 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774703 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774710 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774722 1558425 addons.go:479] Verifying addon ingress=true in "addons-301682"
	I0630 14:19:16.774767 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774700 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774931 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.774943 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.774797 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775055 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.775066 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.775086 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.775936 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775954 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776331 1558425 out.go:177] * Verifying ingress addon...
	I0630 14:19:16.776373 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776407 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776413 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776457 1558425 out.go:177] * Verifying registry addon...
	I0630 14:19:16.776565 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776586 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776591 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776599 1558425 addons.go:479] Verifying addon metrics-server=true in "addons-301682"
	I0630 14:19:16.776668 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776681 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.778466 1558425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:19:16.779098 1558425 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-301682 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:19:16.779694 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:19:16.788556 1558425 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0630 14:19:16.789906 1558425 api_server.go:141] control plane version: v1.33.2
	I0630 14:19:16.789941 1558425 api_server.go:131] duration metric: took 17.709666ms to wait for apiserver health ...
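The healthz wait (api_server.go:253/279 above) reduces to an HTTPS GET against /healthz that passes once the server answers 200 with body "ok". A hedged sketch of that loop (skipping TLS verification is a demo simplification; minikube's real check trusts the cluster CA):

package sketch

import (
	"context"
	"crypto/tls"
	"io"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitAPIServerHealthy polls <endpoint>/healthz until it returns 200 "ok".
// InsecureSkipVerify is a shortcut for the sketch only.
func waitAPIServerHealthy(ctx context.Context, endpoint string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	return wait.PollUntilContextTimeout(ctx, time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			resp, err := client.Get(endpoint + "/healthz")
			if err != nil {
				return false, nil
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
		})
}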
	I0630 14:19:16.789955 1558425 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:19:16.796628 1558425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:19:16.796662 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
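The kapi.go:75/86/96 lines that dominate the rest of this log all follow one pattern: list pods in a namespace by label selector, then poll until every match reports phase Running. An illustrative client-go version (the function name and 3s interval are assumptions; the 18-minute budget is just a generous stand-in for the per-addon timeout):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsRunning lists pods matching the selector and polls until at least
// one exists and all of them report phase Running.
func waitPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, 18*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

In this run the registry pods never leave Pending (the registry container sits in ImagePullBackOff), which is why the loop below repeats until the test's deadline.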
	I0630 14:19:16.796921 1558425 system_pods.go:59] 15 kube-system pods found
	I0630 14:19:16.796954 1558425 system_pods.go:61] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.796961 1558425 system_pods.go:61] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.796972 1558425 system_pods.go:61] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.796976 1558425 system_pods.go:61] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.796984 1558425 system_pods.go:61] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.796987 1558425 system_pods.go:61] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.796992 1558425 system_pods.go:61] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.796997 1558425 system_pods.go:61] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.797004 1558425 system_pods.go:61] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.797011 1558425 system_pods.go:61] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.797018 1558425 system_pods.go:61] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.797028 1558425 system_pods.go:61] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.797035 1558425 system_pods.go:61] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.797042 1558425 system_pods.go:61] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.797049 1558425 system_pods.go:61] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.797057 1558425 system_pods.go:74] duration metric: took 7.094316ms to wait for pod list to return data ...
	I0630 14:19:16.797068 1558425 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:19:16.798790 1558425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:19:16.798807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:16.809885 1558425 default_sa.go:45] found service account: "default"
	I0630 14:19:16.809914 1558425 default_sa.go:55] duration metric: took 12.83884ms for default service account to be created ...
	I0630 14:19:16.809925 1558425 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:19:16.818226 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.818251 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.818525 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.818587 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:16.818715 1558425 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
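The 'default-storageclass' warning above is an ordinary optimistic-concurrency conflict: the local-path StorageClass changed between the read and the update, so the apiserver rejected the stale write. client-go's stock remedy is retry.RetryOnConflict, sketched here (storageclass.kubernetes.io/is-default-class is the real annotation key; the helper itself is illustrative, not minikube's code):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-reading and re-applying whenever the write loses an update race
// ("the object has been modified" conflicts).
func markNonDefault(ctx context.Context, c kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}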
	I0630 14:19:16.836146 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.836179 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.836489 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.836539 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.898260 1558425 system_pods.go:86] 15 kube-system pods found
	I0630 14:19:16.898321 1558425 system_pods.go:89] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.898334 1558425 system_pods.go:89] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.898347 1558425 system_pods.go:89] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.898355 1558425 system_pods.go:89] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.898364 1558425 system_pods.go:89] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.898371 1558425 system_pods.go:89] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.898380 1558425 system_pods.go:89] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.898390 1558425 system_pods.go:89] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.898398 1558425 system_pods.go:89] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.898406 1558425 system_pods.go:89] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.898431 1558425 system_pods.go:89] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.898443 1558425 system_pods.go:89] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.898451 1558425 system_pods.go:89] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.898461 1558425 system_pods.go:89] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.898471 1558425 system_pods.go:89] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.898485 1558425 system_pods.go:126] duration metric: took 88.551205ms to wait for k8s-apps to be running ...
	I0630 14:19:16.898500 1558425 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:19:16.898565 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:19:17.317126 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:17.374411 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.596164186s)
	W0630 14:19:17.374478 1558425 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:19:17.374547 1558425 retry.go:31] will retry after 162.408109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
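The failure above is an ordering problem, not a broken manifest: the VolumeSnapshotClass custom resource was submitted in the same apply as the CRDs that define it, so the RESTMapper had no mapping for the new kind yet. The retry at retry.go:31 (re-applied with --force at 14:19:17.537869, once the CRDs are established) succeeds about two seconds later. A sketch of that generic apply-then-backoff-retry pattern (helper name and retry budget are assumptions):

package sketch

import (
	"fmt"
	"time"
)

// applyWithRetry re-runs an apply a few times with doubling backoff, which is
// enough for "ensure CRDs are installed first" failures: by the second or
// third attempt the CRDs are registered and the custom resources map cleanly.
func applyWithRetry(run func() error, attempts int) error {
	backoff := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = run(); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("apply still failing after %d attempts: %w", attempts, err)
}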
	I0630 14:19:17.425522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.537869 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:17.785630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.785674 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.306660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.306889 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.552015 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.004467325s)
	I0630 14:19:18.552194 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552225 1558425 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529350239s)
	I0630 14:19:18.552276 1558425 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.653693225s)
	I0630 14:19:18.552302 1558425 system_svc.go:56] duration metric: took 1.653798008s WaitForService to wait for kubelet
	I0630 14:19:18.552318 1558425 kubeadm.go:578] duration metric: took 10.417201876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:19:18.552241 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552348 1558425 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:19:18.552645 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552664 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552675 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552686 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552919 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552936 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552948 1558425 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:18.554300 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:18.555232 1558425 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:19:18.556214 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:19:18.556827 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:19:18.557433 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:19:18.557459 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:19:18.596354 1558425 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:19:18.596393 1558425 node_conditions.go:123] node cpu capacity is 2
	I0630 14:19:18.596408 1558425 node_conditions.go:105] duration metric: took 44.050461ms to run NodePressure ...
	I0630 14:19:18.596422 1558425 start.go:241] waiting for startup goroutines ...
	I0630 14:19:18.603104 1558425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:19:18.603135 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:18.637868 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:19:18.637900 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:19:18.748099 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:18.748163 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:19:18.792604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.792626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.843691 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:19.062533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.282741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.282766 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:19.563538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.721889 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.183953285s)
	I0630 14:19:19.721971 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.721990 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.722705 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:19.722805 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.722841 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.722861 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.722870 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.723362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.723392 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.784854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.785087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.084451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.338994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.339229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.491192 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.647431709s)
	I0630 14:19:20.491275 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491294 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491664 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.491685 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.491696 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491704 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491987 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:20.492026 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.492052 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.493344 1558425 addons.go:479] Verifying addon gcp-auth=true in "addons-301682"
	I0630 14:19:20.495394 1558425 out.go:177] * Verifying gcp-auth addon...
	I0630 14:19:20.497751 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:19:20.544088 1558425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:19:20.544122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:20.616283 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.790338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.794229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.001876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.103156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.286215 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:21.287404 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.501971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.603568 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.782426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.783543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.002607 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.061769 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.283406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.283458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.501544 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.563768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.782065 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.785105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.001506 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.062272 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.283151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.283566 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.501628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.782561 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.783298 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.001778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.062179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.351397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.351533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:24.502302 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.560819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.783532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.783606 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.000665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.066861 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.283070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:25.283328 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.501446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.566260 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.782894 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.783547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.005011 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.064792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.283606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.502271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.561300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.782991 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.783050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.001311 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.061332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.282733 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:27.284226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.501814 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.562410 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.783241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.783497 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.002164 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.060264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.282980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:28.283180 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.500523 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.560485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.783545 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.000985 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.061185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.282663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.282792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.500648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.560782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.782042 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.783619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.001946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.060881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.282133 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:30.283049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.500975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.560862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.782609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.782603 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.001534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.060703 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.282157 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.283847 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:31.500628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.560669 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.782294 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.782820 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.001862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.061034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.281959 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:32.282969 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.501719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.561075 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.783855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.783890 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.001382 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.060618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:33.289955 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.501909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.560848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.782531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.784168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.003605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.060279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.282397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:34.282808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.613798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.614652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.782800 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.000818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.060998 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.282231 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.283653 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:35.509348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.560724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.781570 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.783017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.001083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.060369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.702785 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.703123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.703555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.706970 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:36.804241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.804456 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.001688 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.061214 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.282908 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.284915 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:37.500826 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.560092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.782407 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.784106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.001428 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.061107 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:38.282046 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:38.283180 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.501297 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.563927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.189422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.189531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.190495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.191248 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.282505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.282920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.500781 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.560685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.781821 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.001299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.071624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.283182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.283221 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:40.501026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.560313 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.783565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.783591 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.002088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.079056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.283365 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.283894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:41.501095 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.565670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.781792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.782774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.000619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.060899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:42.283068 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.501445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.560361 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.783776 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.783964 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.001605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.060231 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.284417 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:43.284499 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.501005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.560455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.782135 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.783795 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.001747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.061008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:44.281520 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:44.282610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.501859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.561166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.190446 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.291473 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291489 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.291572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.293575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.501432 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.560935 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.782091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.783835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.001576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.060855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.281632 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.282695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.500503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.560648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.781708 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.783401 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.001349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.060664 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.288991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.289151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.501378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.560670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.783679 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.783934 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.000774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.063640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.283018 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.288264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.501060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.782532 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.783014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.001586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.060136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.284470 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.284616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.501493 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.560740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.782176 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.783205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.001724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.061175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.285556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.285655 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.501435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.561083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.782238 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.783288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.001421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.060971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.312768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.312922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.501057 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.560396 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.782795 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.783117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.001134 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.060267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.283193 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.283291 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.502021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.560380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.783076 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.784387 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.001939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:53.061183 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:53.281990 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:53.283259 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.502028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:53.560640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:53.782501 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.783649 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:54.001220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:54.061666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:54.282039 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:54.283121 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:54.501316 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:54.560447 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:54.783504 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:54.783727 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.000517 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:55.061087 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:55.282418 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.283456 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:55.502008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:55.560325 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:55.783555 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:56.001431 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:56.060991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:56.282249 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:56.283767 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:56.501025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:56.560838 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:56.782271 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:56.782994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.001527 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:57.061065 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:57.283743 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.283956 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:57.502182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:57.560567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:57.783238 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.783763 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.001345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:58.060462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:58.282685 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.282967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:58.501929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:58.561387 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:58.782616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.783122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.001904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:59.061081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:59.282072 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:59.282798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.501590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:59.561148 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:59.783157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.783870 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.000897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:00.061506 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:00.281697 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.282838 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:00.500884 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:00.561577 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:00.781570 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.783296 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.002271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:01.061072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:01.282434 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:01.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.501896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:01.561570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:01.782586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.782842 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.000727 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:02.061003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:02.282765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:02.282809 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.501507 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:02.560968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:02.782628 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.782871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:03.001603 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:03.060848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:03.282653 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:03.283752 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:03.501978 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:03.560629 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:03.781639 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:03.782897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.001586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:04.061045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:04.283389 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.283730 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:04.500996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:04.560611 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:04.783093 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.783260 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:05.001555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:05.060738 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:05.282896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:05.282927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:05.501053 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:05.602159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:05.783741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:05.783966 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:06.001070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:06.060590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:06.282798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:06.282853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:06.500761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:06.560993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:06.784950 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:06.785237 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:07.001699 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:07.061334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:07.282883 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:07.283203 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:07.502196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:07.561691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:07.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:07.783652 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:08.001648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:08.061773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:08.281568 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:08.283567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:08.502500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:08.561076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:08.782892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:08.783238 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.001899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:09.060933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:09.282681 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.283009 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:09.501744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:09.561385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:09.782769 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.783806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.000774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:10.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:10.282325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:10.283050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.501741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:10.560858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:10.783005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.783200 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.001016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:11.060512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:11.283758 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.284197 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:11.502206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:11.560441 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:11.782907 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.783577 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.001888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:12.060849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:12.282280 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:12.282418 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.501807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:12.561349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:12.783005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.783005 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:13.002304 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:13.061129 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:13.283315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:13.283435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:13.501972 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:13.561333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:13.783487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:13.783655 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.001242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:14.061103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:14.282022 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.283080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:14.501717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:14.560630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:14.781894 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.782368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:15.001528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:15.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:15.282562 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:15.282888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:15.500950 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:15.560206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:15.782473 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:15.783016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.001340 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:16.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:16.283085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.283196 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:16.501224 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:16.560432 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:16.783077 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.783121 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.001536 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:17.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:17.281574 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.282511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:17.502499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:17.560896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:17.781956 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.782624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:18.000392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:18.060943 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:18.283184 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:18.283879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:18.501537 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:18.562926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:18.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:18.782451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.001149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.061264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.282752 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:19.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.502206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.560605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.782509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.782554 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.002254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.061241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.282485 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.282882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:20.500924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.561822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.783542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.002205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.060747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.282021 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.282563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.505254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.561819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.782724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.000999 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.060710 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.281865 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:22.282163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.501978 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.562175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.782908 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.782992 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.001604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.061218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.282416 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.282830 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:23.501539 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.562050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.782303 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.784161 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.001477 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.060126 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.282030 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.283809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.501806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.602840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.782907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.000878 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.061123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.282013 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.283761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.504764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.606761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.782107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.782874 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.000621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.061556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.285974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.286315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.502580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.561105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.783739 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.000735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.061233 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.282071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:27.285152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.501573 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.561120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.782732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.782840 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.000630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.060922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.282390 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.283472 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.501080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.560454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.782967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.782976 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.237835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.237889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.336150 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.336331 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.501907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.602786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.782929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:30.001264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:30.060690 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:30.281762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:30.282475 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:30.501884 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:30.572349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:30.783064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:30.783109 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.002526 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:31.062561 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:31.283136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:31.283179 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.501139 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:31.560586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:31.784336 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.784346 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:32.001433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:32.060760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:32.290054 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:32.291744 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:32.500808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:32.568201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:32.782533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:32.782904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:33.001710 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:33.061374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:33.282933 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:33.284426 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:33.501589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:33.561081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:33.784027 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:33.784261 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:34.002823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:34.063430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:34.284309 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:34.285663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:34.500807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:34.561036 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:34.784211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:34.784213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:35.001454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:35.061492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:35.281525 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:35.282364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:35.501644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:35.560943 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:35.783199 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:35.783563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.002111 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:36.060708 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:36.281535 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:36.283996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.861446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:36.861593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:36.965825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.966272 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.001158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:37.061370 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:37.283380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:37.283513 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.501468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:37.561192 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:37.785517 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.786292 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:38.001484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:38.061069 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:38.284714 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:38.284846 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:38.502574 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:38.561181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:38.782537 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:38.783069 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:39.001928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:39.061873 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:39.282406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:39.283481 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:39.503169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:39.561098 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:39.782813 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:39.783641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:40.002181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:40.060266 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:40.282891 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:40.283849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:40.500843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:40.560442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:40.782926 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:40.783029 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:41.001321 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:41.060760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:41.281798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:41.284037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:41.502572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:41.560951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:41.782285 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:41.783051 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.001897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:42.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:42.283725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.283888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:42.501480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:42.561461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:42.782548 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.782713 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.093940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:43.097843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:43.282818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.282819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:43.501106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:43.560130 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:43.782663 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.783944 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.001422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:44.060503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:44.281922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:44.283136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.501600 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:44.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:44.782904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.782953 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:45.001192 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:45.060597 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:45.283117 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:45.283173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:45.501174 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:45.560528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:45.786937 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:45.787508 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:46.003194 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:46.061532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:46.283078 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:46.283645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:46.501606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:46.561149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:46.783542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:46.783577 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:47.001484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:47.061088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:47.282533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:47.283511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:47.501685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:47.560979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:47.783792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:47.783801 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:48.000652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:48.061347 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:48.282791 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:48.283149 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:48.501196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:48.560571 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:48.782724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:48.783665 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:49.001578 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:49.060917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:49.283443 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:49.283529 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:49.501548 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:49.560886 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:49.782606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:49.782806 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:50.001040 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.060499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.282867 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.283070 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:50.501307 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.782746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.782790 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.000827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.061599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.281741 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.282303 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:51.501882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.561159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.782745 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.784064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.001127 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.060734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.281924 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.282442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.501618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.560955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.782622 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.001976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.060014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.283833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.283868 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.501946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.787788 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.788281 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.001841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.282587 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.282894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.501076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.560738 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.783982 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.784379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.001546 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.061794 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.282534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:55.283165 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.501579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.560818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.782537 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.001725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.060844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.282248 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.283345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:56.501508 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.560858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.781927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.783218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.001706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.061118 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.283582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.283762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:57.501038 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.560439 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.783590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.783720 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.001746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.061827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.282480 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.282960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:58.501434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.561028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.781998 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.782879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.001764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.061200 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.282609 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:59.282747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.501377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.560960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.785243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.785330 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.001691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.061010 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.282764 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.283580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.501865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.561741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.784015 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.784091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.060981 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.282859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:01.283036 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.501809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.561922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.782501 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.783709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.002244 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.061572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.284366 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.501516 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.562167 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.782718 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.783603 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.002195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.060569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.283243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.283492 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:03.501693 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.560599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.783852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.784006 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.000924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.061226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.282297 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.282987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.501089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.560458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.783051 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.783361 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.001357 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.060980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.282432 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.284945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:05.501078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.560392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.782556 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.782745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.001356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:06.060485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:06.282979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.283057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:06.500697 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:06.561446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:06.783120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.783258 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:07.001429 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:07.060755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:07.281892 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:07.282422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:07.501870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:07.561285 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:07.783836 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:07.783869 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.001179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:08.061434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:08.282620 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:08.282643 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.501890 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:08.561334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:08.782409 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.001428 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:09.060624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:09.283619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.283843 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:09.500869 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:09.561327 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:09.786343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.786990 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:10.001363 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:10.061669 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:10.281724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:10.283241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:10.501499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:10.560382 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:10.783379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:10.783703 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.006867 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:11.061528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:11.282068 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.284097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:11.501425 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:11.561482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:11.781830 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:12.003000 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:12.061220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:12.283490 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:12.283632 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:12.502107 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:12.560563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:12.786245 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:12.787717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:13.002660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:13.061638 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:13.282127 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:13.283171 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:13.501269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:13.560543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:13.783150 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:13.783156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:14.001885 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:14.061206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:14.283314 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:14.283499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:14.505208 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:14.561163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:14.782762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:14.783841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:15.003346 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:15.060844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:15.282760 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:15.284010 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:15.501266 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:15.560665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:15.781811 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:15.782474 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:16.263325 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:16.263338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:16.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:16.283738 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:16.502117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:16.604450 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:16.783760 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:16.783855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:17.005983 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:17.105360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:17.282754 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:17.282882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:17.500988 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:17.560342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:17.782772 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:17.783686 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:18.007857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:18.061140 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:18.283671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:18.283796 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:18.501209 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:18.560948 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:18.783319 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:18.783461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.001371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:19.061031 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:19.282807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:19.283969 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.501517 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:19.561032 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:19.782932 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.783012 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.005480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:20.060901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:20.282259 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.283412 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:20.502027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:20.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:20.782626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.783395 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:21.001871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:21.061472 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:21.283060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:21.283210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:21.501633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:21.561484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:21.782741 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:21.783745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:22.001089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:22.060638 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:22.283014 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:22.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:22.501633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:22.560933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:22.782511 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:22.783627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:23.001249 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:23.060586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:23.281968 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:23.282925 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:23.501824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:23.561702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:23.781838 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:23.782821 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:24.000909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:24.061364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:24.282635 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:24.282833 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:24.500870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:24.561501 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:24.783353 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:24.783411 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:25.001919 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:25.060593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:25.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:25.283280 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:25.501682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:25.560920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:25.782234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:25.782607 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.001990 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:26.062631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:26.281975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:26.283634 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.502337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:26.561388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:26.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.783873 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.000786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.061090 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.282519 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.283219 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:27.502098 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.560684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.782103 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.782356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.001961 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.061081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.283082 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:28.283091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.502080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.560369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.782819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.782888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.001300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.060528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.282927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:29.500881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.561931 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.782352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.001314 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.061754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:30.283911 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.501691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.561708 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.782920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.783505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.018759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.118123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.283780 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.283813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:31.500732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.561257 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.782789 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.783857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.000941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.061352 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.283225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.283376 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.502377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.560813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.782071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.782893 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.001627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.061719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.282356 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.282853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.501995 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.560218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.783100 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.783628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.061301 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.282792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.283319 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.502265 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.603312 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.783237 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.783602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.001558 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.061771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.282165 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.283085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.501433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.560951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.782571 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.783567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.001993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.060500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.282630 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:36.282912 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.501547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.561085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.783668 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.783838 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:37.001644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:37.061735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:37.282616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:37.283047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:37.501624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:37.562291 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:37.783863 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:37.784060 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:38.001210 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:38.060997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:38.283100 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:38.283242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:38.501949 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:38.561400 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:38.783522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:38.783562 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.001632 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:39.061775 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:39.283431 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:39.283517 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.502108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:39.561075 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:39.782288 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.783100 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:40.001536 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:40.061613 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:40.282272 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:40.282780 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:40.501799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:40.561026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:40.782057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:40.783645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.002564 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:41.062621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:41.282271 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:41.283169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.501391 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:41.562411 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:41.783324 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.783579 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:42.002705 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:42.061893 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:42.282583 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:42.283671 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:42.502733 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:42.562940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:42.782853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:42.783073 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:43.001824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:43.062102 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:43.282830 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:43.283751 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:43.501119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:43.560492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:43.784115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:43.784145 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.001522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:44.061345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:44.282831 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.283549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:44.503997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:44.607178 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:44.782832 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.783717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.002427 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:45.061729 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:45.282878 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:45.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.501997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:45.561163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:45.783552 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.783659 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:46.001682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:46.062807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:46.282597 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:46.283939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:46.503275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:46.561513 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:46.784613 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:46.784911 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.001562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:47.061725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:47.283169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:47.283405 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.501322 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:47.561186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:47.782927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.784021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:48.001774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:48.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:48.282175 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:48.283210 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:48.502097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:48.561677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:48.782622 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:48.783039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:49.001787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:49.071403 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:49.282882 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:49.283702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:49.501062 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:49.560808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:49.781892 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:49.782731 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:50.001262 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:50.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:50.282041 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:50.283114 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:50.501527 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:50.561365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:50.786406 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:50.786567 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:51.001808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:51.061553 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:51.282657 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:51.283296 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:51.501742 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:51.561178 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:51.782922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:51.783680 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:52.000859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:52.061514 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:52.282067 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:52.282621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:52.502198 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:52.561158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:52.782564 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:52.782792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:53.001035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:53.060667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:53.281989 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:53.283220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:53.501930 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:53.560987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:53.782210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:53.783173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:54.004903 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:54.061068 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:54.281852 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:54.282368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:54.501595 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:54.561905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:54.782333 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:54.783021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:55.001532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:55.060924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:55.281744 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:55.282438 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:55.501581 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:55.561843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:55.783311 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:55.784241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:56.001655 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:56.061418 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:56.282846 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:56.283057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:56.501645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:56.562026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:56.782767 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:56.783836 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:57.000993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:57.061640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:57.282555 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:57.284099 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:57.501478 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:57.561337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:57.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:57.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:58.001026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:58.061636 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:58.283771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:58.284039 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:58.501701 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:58.564159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:58.782721 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:58.783561 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:59.001195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:59.062667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:59.286778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:59.287064 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:59.501183 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:59.560532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:59.783236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:59.783406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:00.001562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:00.062563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:00.283855 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:00.284134 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:00.501865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:00.564486 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:00.782887 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:00.782984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:01.001528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:01.061955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:01.283003 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:01.283746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:01.501317 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:01.560704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:01.782191 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:01.783094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:02.001320 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:02.061973 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:02.283076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:02.283282 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:02.501799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:02.561666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:02.783208 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:02.783342 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:03.004810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:03.063284 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:03.283432 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:03.283755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:03.501473 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:03.560862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:03.782327 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:03.783798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:04.001354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:04.060898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:04.283327 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:04.283635 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:04.501503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:04.560912 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:04.782536 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:04.783678 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:05.001055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:05.061771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:05.282390 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:05.284013 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:05.501292 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:05.561056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:05.782798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:05.784365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:06.001516 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:06.061337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:06.282754 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:06.283371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:06.502565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:06.562077 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:06.783138 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:06.783697 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:07.000859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:07.062329 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:07.282379 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:07.282968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:07.501169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:07.560984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:07.782268 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:07.784049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:08.001494 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:08.061308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:08.283724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:08.284185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:08.502230 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:08.560967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:08.783790 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:08.783900 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:09.001053 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:09.060828 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:09.283284 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:09.283806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:09.501109 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:09.560617 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:09.782234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:09.783349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.001664 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:10.061833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:10.283401 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:10.283402 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.501704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:10.560961 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:10.783469 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.783522 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.001757 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:11.061124 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:11.283792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:11.283989 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.501103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:11.560840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:11.782033 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.783604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.003374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:12.060433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:12.282976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.283110 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:12.501047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:12.560677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:12.783921 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.784167 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.002696 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:13.063144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:13.282766 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:13.282879 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.501555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:13.561637 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:13.781893 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.782616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.001004 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:14.061103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:14.283205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.283446 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:14.501550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:14.562143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:14.783957 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.784112 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.001423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:15.062033 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:15.282424 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.282946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:15.501071 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:15.560348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:15.782780 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.783648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:16.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:16.282525 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:16.283260 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.501360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:16.560258 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:16.783827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.783875 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.001565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:17.060813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:17.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.283097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:17.501048 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:17.560778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:17.781850 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.783463 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:18.002176 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:18.060602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:18.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:18.283181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:18.501844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:18.560670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:18.783600 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:18.783637 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.002695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:19.061454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:19.282337 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:19.284196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.501898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:19.566207 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:19.783150 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.783388 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.001915 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:20.063129 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:20.284273 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.285468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:20.504702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:20.560957 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:20.785008 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.785055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:21.001554 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:21.061007 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:21.290166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:21.290315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:21.504702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:21.607046 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:21.782303 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:21.783112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.001610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:22.061225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:22.282696 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:22.283116 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.501584 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:22.562703 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:22.782599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.783389 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.002163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:23.061027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:23.283818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.283940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:23.501359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:23.561687 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:23.781738 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.783834 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.001106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:24.060840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:24.283144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.283159 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:24.501879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:24.561177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:24.784299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.784387 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.001461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:25.060909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:25.282763 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.283372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:25.501554 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:25.561056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:25.782472 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.002067 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:26.060538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:26.282323 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:26.284932 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.501783 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:26.561217 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:26.786385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.786624 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.002328 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:27.060923 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:27.282259 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.283369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:27.502704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:27.561567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:27.783592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.783609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.001238 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:28.061117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:28.283592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:28.283779 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.503754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:28.561835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:28.783295 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.783426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:29.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:29.061565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:29.284407 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:29.284751 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:29.501482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:29.561448 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:29.783602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:29.783747 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:30.000612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:30.061762 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:30.282244 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:30.282945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:30.501114 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:30.561086 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:30.783309 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:30.783420 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.001952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:31.060101 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:31.282326 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.284221 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:31.501777 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:31.561372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:31.783156 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.783322 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.002694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:32.061381 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:32.282764 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:32.284529 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.505575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:32.566298 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:32.784512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.784864 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:33.001675 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:33.060993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:33.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:33.283872 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:33.501278 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:33.560542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:33.787772 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:33.787934 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.001324 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:34.060773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:34.282840 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.283511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:34.502371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:34.560627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:34.783094 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.783413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:35.002904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.061777 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.283905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:35.283934 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.501100 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.560247 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.783592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.784358 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.001812 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.062616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.282087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:36.282661 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.500966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.562267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.783442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.001767 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.061035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.282352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.283181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:37.501481 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.562204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.782528 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.783035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.001204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.060871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.282324 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.283278 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.501823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.562308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.784023 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.784618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.000984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.062203 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.283474 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.502760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.563797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.782847 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.782939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.001158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.061550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.281624 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.282091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.501221 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.560905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.782931 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.782945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.002061 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.061582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.283006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.283254 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:41.501580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.561026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.785372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.785518 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.001833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.064672 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.282529 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.283845 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:42.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.561279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.783728 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.784425 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.002525 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.061268 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.283438 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.283504 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:43.501326 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.561048 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.782534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.782716 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.001543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.062385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.282669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.283862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:44.501191 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.562184 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.782210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.783841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.002615 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.282873 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.283074 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.501319 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.560538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.781794 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.783447 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.002122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.060715 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.282111 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:46.282760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.501006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.560037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.784753 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.784785 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.001157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.060804 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.283335 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:47.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.782851 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.783119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.001360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.282370 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:48.283342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.501709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.783888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.784092 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.001883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.060787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.283083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:49.283344 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.501731 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.782681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.000966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.060550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.283074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:50.501643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.561462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.783025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.002569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.063186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.283275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:51.283325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.501455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.560436 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.782975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.783423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.001631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.061667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.281818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.282342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.501284 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.560864 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.782151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.782348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.007368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.060641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.283706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.284276 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:53.501189 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.560654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.782398 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.782656 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.002682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.061286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.282383 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.283815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.501271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.560549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.790530 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.790755 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.001308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.284397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.284413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:55.501771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.781963 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.782941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.000822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.061650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.283524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.283580 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.501667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.560681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.782151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.281690 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.283202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.501647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.561213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.782612 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.001789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.282211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:58.284618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.500839 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.561378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.784612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.784669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.000744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.062091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.660112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.664035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:59.664534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.665074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.782692 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.783576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.003476 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.061094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.285714 1558425 kapi.go:107] duration metric: took 3m43.507242469s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0630 14:23:00.286859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.502299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.561094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.001892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.061673 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.501245 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.005689 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.283736 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.501952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.783177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.002017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.061604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.500854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.561092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.783701 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.063589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.283519 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.501728 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.566277 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.002269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:05.060852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.283974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.507100 1558425 kapi.go:107] duration metric: took 3m45.009344267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:23:05.509228 1558425 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-301682 cluster.
	I0630 14:23:05.510978 1558425 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:23:05.512549 1558425 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
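	(A minimal sketch of the opt-out the message above describes: per the log, the webhook checks for a label with the `gcp-auth-skip-secret` key at pod creation, so the label goes in the pod spec rather than being applied to a running pod. The pod name and image below are hypothetical:
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds              # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"  # the key is what matters, per the message above
	  spec:
	    containers:
	    - name: app
	      image: busybox                # hypothetical image
	)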
	I0630 14:23:05.561380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.783374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.062392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.561684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.785144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.066028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.284562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.561973 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.785021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.060666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.561745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.783877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.284091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.561492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.787449 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.062802 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.284110 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.560730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.783003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.060643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.284380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.561869 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.060853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.283759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.560457 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.784225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.061224 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.283671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.560056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.783513 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.061509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.283696 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.561206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.784675 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.061356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.284952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.560611 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.784123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.061089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.283173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.786612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.061952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.284288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.561055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.061797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.283435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.560968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.783185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.061655 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.285318 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.561730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.782858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.061290 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.284108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.560495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.783799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.060435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.283888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.560658 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.784042 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.064259 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.283397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.562304 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.783790 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.062882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.283492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.565989 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.061006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.284421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.561604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.783815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.060798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.283106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.572104 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.783229 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.061003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.283003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.783676 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.061789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.283647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.561595 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.784152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.061056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.284078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.561025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.060975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.284112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.561034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.783332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.060612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.284928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.560487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.784282 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.061202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.283691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.561004 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.783682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.283339 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.561471 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.783951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.060926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.283825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.563195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.783726 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.060359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.283321 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.561124 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.061349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.283415 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.561084 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.784344 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.061159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.283670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.562677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.783294 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.062782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.284848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.560236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.783962 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.060039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.283768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.560166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.782740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:39.060825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:39.284072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:39.561353 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:39.783269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:40.061500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:40.283553 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:40.561115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:40.784062 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:41.061241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:41.283888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:41.560612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:41.784453 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:42.061524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:42.283887 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:42.560352 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:42.783080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:43.060608 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:43.283756 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:43.561250 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:43.783439 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:44.061813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:44.284043 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:44.560423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:44.783723 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:45.062299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:45.283512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:45.562182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:45.783464 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:46.061770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:46.283290 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:46.561127 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:46.784143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:47.062746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:47.283685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:47.561750 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:47.783610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:48.061340 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:48.284254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:48.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:48.783030 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:49.060658 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:49.283841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:49.561356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:49.783263 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:50.061883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:50.283413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:50.561440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:50.783774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:51.060233 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:51.283243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:51.561692 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:51.783771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:52.060778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:52.283008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:52.560248 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:52.784031 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:53.061426 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:53.284243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:53.561964 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:53.783354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:54.061484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:54.283980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:54.560599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:54.783926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:55.060942 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:55.284120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:55.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:55.782802 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:56.059964 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:56.283717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:56.560585 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:56.784927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:57.061040 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:57.283344 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:57.561904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:57.783533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:58.061374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:58.284877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:58.560774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:58.784163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:59.061765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:59.284774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:59.561857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:59.782773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:00.061141 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:00.283396 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:00.561139 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:00.783625 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:01.061333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:01.283747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:01.560949 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:01.783456 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:02.061482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:02.284158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:02.560735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:02.784827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:03.061045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:03.282806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:03.560671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:03.782706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:04.060646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:04.283286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:04.560657 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:04.783580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:05.061560 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:05.283579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:05.561242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:05.783654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:06.061539 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:06.283732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:06.560228 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:06.783593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:07.061818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:07.283996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:07.561190 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:07.783368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:08.062755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:08.283379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:08.561279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:08.783976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:09.061115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:09.285316 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:09.561149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:09.783381 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:10.061707 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:10.284158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:10.560899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:10.783331 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:11.060911 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:11.285242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:11.567687 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:11.783399 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:12.061770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:12.284164 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:12.561303 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:12.784575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:13.062079 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:13.283362 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:13.561544 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:13.784026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:14.061171 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:14.284055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:14.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:14.784816 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:15.061671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:15.285032 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:15.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:15.782955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:16.060555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:16.283695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:16.561223 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:16.784108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:17.061443 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:17.283885 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:17.560716 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:17.783754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:18.061542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:18.282788 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:18.560770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:18.783579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:19.060318 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:19.283045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:19.560843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:19.782930 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:20.061222 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:20.282971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:20.560677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:20.783818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:21.060551 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:21.283550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:21.562179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:21.784378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:22.062214 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:22.283320 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:22.560609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:22.783739 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:23.060891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:23.283079 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:23.561022 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:23.783812 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:24.060803 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:24.283620 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:24.561450 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:24.784169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:25.061522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:25.283646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:25.561354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:25.784907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:26.061231 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:26.283357 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:26.561047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:26.782954 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:27.062644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:27.283870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:27.560460 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:27.783972 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:28.061026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:28.283434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:28.560383 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:28.784236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:29.061863 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:29.283492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:29.561072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:29.784790 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:30.060929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:30.283116 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:30.560849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:30.784365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:31.061044 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:31.283485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:31.560958 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:31.783343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:32.060933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:32.283256 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:32.560785 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:32.783833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:33.063333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:33.283905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:33.561202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:33.783647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:34.060633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:34.283403 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:34.561258 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:34.783824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... the two kapi.go:96 poll lines above repeat verbatim, alternating roughly every 250ms, from 14:24:35 through 14:25:15; both pods remain Pending for the entire interval ...]
	I0630 14:25:16.061333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.283487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.560928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
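[Editor's note] The repeated kapi.go:96 lines above come from a simple label-selector poll loop. Below is a minimal, illustrative sketch of such a loop using client-go; the helper names (waitForLabeledPods, podReady) are hypothetical and this is not minikube's actual kapi implementation, but it reproduces the observable behavior: list pods by label every ~250ms and give up once the 6m0s budget lapses, surfacing a context deadline error.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForLabeledPods polls every 250ms until all pods matching selector
    // are Ready, or the timeout lapses (surfacing a context deadline error).
    func waitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 250*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient failures and empty lists: keep polling
                }
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // 6m0s mirrors the "took 6m0.000148464s" duration metric in the log below.
        if err := waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            fmt.Println("! Enabling 'registry' returned an error:", err)
        }
    }

When the deadline fires before every matching pod is Ready, the caller sees exactly the "context deadline exceeded" wording reported in the warnings that follow.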
	I0630 14:25:16.779853 1558425 kapi.go:107] duration metric: took 6m0.000148464s to wait for kubernetes.io/minikube-addons=registry ...
	W0630 14:25:16.780114 1558425 out.go:270] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0630 14:25:17.061823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:17.560570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.061810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.557742 1558425 kapi.go:107] duration metric: took 6m0.000905607s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0630 14:25:18.557918 1558425 out.go:270] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0630 14:25:18.560047 1558425 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth
	I0630 14:25:18.561439 1558425 addons.go:514] duration metric: took 6m10.426236235s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth]
	I0630 14:25:18.561506 1558425 start.go:246] waiting for cluster config update ...
	I0630 14:25:18.561537 1558425 start.go:255] writing updated cluster config ...
	I0630 14:25:18.561951 1558425 ssh_runner.go:195] Run: rm -f paused
	I0630 14:25:18.569844 1558425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:25:18.574216 1558425 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.580161 1558425 pod_ready.go:94] pod "coredns-674b8bbfcf-gcxhf" is "Ready"
	I0630 14:25:18.580187 1558425 pod_ready.go:86] duration metric: took 5.939771ms for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.583580 1558425 pod_ready.go:83] waiting for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.589631 1558425 pod_ready.go:94] pod "etcd-addons-301682" is "Ready"
	I0630 14:25:18.589656 1558425 pod_ready.go:86] duration metric: took 6.047747ms for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.592675 1558425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.598838 1558425 pod_ready.go:94] pod "kube-apiserver-addons-301682" is "Ready"
	I0630 14:25:18.598865 1558425 pod_ready.go:86] duration metric: took 6.165834ms for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.608664 1558425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.974819 1558425 pod_ready.go:94] pod "kube-controller-manager-addons-301682" is "Ready"
	I0630 14:25:18.974852 1558425 pod_ready.go:86] duration metric: took 366.160564ms for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.183963 1558425 pod_ready.go:83] waiting for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.575199 1558425 pod_ready.go:94] pod "kube-proxy-cm28f" is "Ready"
	I0630 14:25:19.575240 1558425 pod_ready.go:86] duration metric: took 391.247311ms for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.774681 1558425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.173968 1558425 pod_ready.go:94] pod "kube-scheduler-addons-301682" is "Ready"
	I0630 14:25:20.174011 1558425 pod_ready.go:86] duration metric: took 399.300804ms for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.174030 1558425 pod_ready.go:40] duration metric: took 1.603886991s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:25:20.223671 1558425 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:25:20.225538 1558425 out.go:177] * Done! kubectl is now configured to use "addons-301682" cluster and "default" namespace by default
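[Editor's note] The "(minor skew: 0)" printed at start.go:607 compares only the minor component of the kubectl and cluster versions. A self-contained, illustrative sketch of that comparison (minorSkew and minor are hypothetical helper names, not minikube's code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) (int, error) {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("malformed version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    // minorSkew returns |minor(client) - minor(server)|.
    func minorSkew(client, server string) (int, error) {
        cm, err := minor(client)
        if err != nil {
            return 0, err
        }
        sm, err := minor(server)
        if err != nil {
            return 0, err
        }
        if cm > sm {
            return cm - sm, nil
        }
        return sm - cm, nil
    }

    func main() {
        skew, _ := minorSkew("1.33.2", "1.33.2")
        fmt.Printf("kubectl: 1.33.2, cluster: 1.33.2 (minor skew: %d)\n", skew) // prints: minor skew: 0
    }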
	
	
	==> CRI-O <==
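[Editor's note] The entries below are CRI-O's debug log of CRI gRPC traffic (ImageFsInfo, Version, ListContainers requests and their responses). The same ListContainers call can be replayed against the CRI socket; a minimal Go sketch, assuming the default CRI-O endpoint unix:///var/run/crio/crio.sock (crictl ps -a issues the equivalent RPC):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Default CRI-O socket; adjust the endpoint for other runtimes.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty ListContainersRequest triggers the "No filters were applied,
        // returning full container list" debug line seen in the log below.
        resp, err := runtimev1.NewRuntimeServiceClient(conn).
            ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %v  %s\n", c.Id, c.State, c.Metadata.Name)
        }
    }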
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.538182162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294022538156190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f0e91ac-3b27-4418-ae70-020cec6c866f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.538866308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94e7e9d4-fd1a-4fbf-a05d-83462b0a0ed7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.539026528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94e7e9d4-fd1a-4fbf-a05d-83462b0a0ed7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.540064049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-cont
roller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be14803
9a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e
956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container
.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.ku
bernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kube
rnetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794e
fa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6
cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289
d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6
949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94e7e9d4-fd1a-4fbf-a05d-83462b0a0ed7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.582624696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aac1d86e-75f3-42a8-9211-96f35ebe1402 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.582799332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aac1d86e-75f3-42a8-9211-96f35ebe1402 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.584226438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7957d70-2aad-4623-a5df-d2ecbd3d46f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.585441052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294022585412134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7957d70-2aad-4623-a5df-d2ecbd3d46f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.586043165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07ce20ed-6516-4cb1-9361-f40cf8fbbb47 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.586107327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07ce20ed-6516-4cb1-9361-f40cf8fbbb47 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.586909116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-cont
roller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be14803
9a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e
956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container
.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.ku
bernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kube
rnetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794e
fa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6
cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289
d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6
949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07ce20ed-6516-4cb1-9361-f40cf8fbbb47 name=/runtime.v1.RuntimeService/ListContainers
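	The CRI RPCs being polled in this log (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo and /runtime.v1.RuntimeService/ListContainers) can be reproduced by hand with crictl against the same CRI-O socket. This is a reference sketch, not part of the captured log: it assumes CRI-O's default socket path, and the profile name addons-301682 is taken from this report.

	$ minikube ssh -p addons-301682
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers (no filter)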
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.667057314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ffe7d65-c3d9-4bfd-8213-7f8c630b0a47 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.667128985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ffe7d65-c3d9-4bfd-8213-7f8c630b0a47 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.668581352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be111998-2979-41c4-91a4-73a38d0e19f7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.672099407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294022672070034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be111998-2979-41c4-91a4-73a38d0e19f7 name=/runtime.v1.ImageService/ImageFsInfo
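	The same two data-bearing calls can also be issued programmatically with the generated CRI gRPC client. A minimal sketch, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available and that CRI-O listens on its default socket; none of it is taken from the test harness itself:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default socket; the log above shows crio answering on it.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// /runtime.v1.ImageService/ImageFsInfo: image filesystem usage,
		// mirrored by the ImageFsInfoResponse entries in this log.
		img := runtimeapi.NewImageServiceClient(conn)
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("ImageFsInfo: %v", err)
		}
		for _, f := range fs.GetImageFilesystems() {
			fmt.Printf("%s: %d bytes used\n",
				f.GetFsId().GetMountpoint(), f.GetUsedBytes().GetValue())
		}

		// /runtime.v1.RuntimeService/ListContainers with an empty filter,
		// the same unfiltered call logged above ("No filters were applied").
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.GetContainers() {
			fmt.Printf("%.12s %s %s\n", c.GetId(), c.GetState(), c.GetMetadata().GetName())
		}
	}

	Run on the node (e.g. via minikube ssh) with access to the socket; the printed mountpoint and used-bytes figures should correspond to the ImageFsInfoResponse values captured above.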
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.672850421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cdc9d3c-c82a-4776-99a4-bb5cf66144f9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.672922370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cdc9d3c-c82a-4776-99a4-bb5cf66144f9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:33:42 addons-301682 crio[849]: time="2025-06-30 14:33:42.673619007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cdc9d3c-c82a-4776-99a4-bb5cf66144f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	ccb1fec83c55c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          8 minutes ago       Running             busybox                                  0                   744d3a8558a51       busybox
	f4356fb8a203d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          8 minutes ago       Running             csi-snapshotter                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	505ec6a97e3e1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          8 minutes ago       Running             csi-provisioner                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	0e8810b68e820       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            10 minutes ago      Running             liveness-probe                           0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	977ef3af77456       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           10 minutes ago      Running             hostpath                                 0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	12db79e5b741e       registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15                             10 minutes ago      Running             controller                               0                   e27c33843e336       ingress-nginx-controller-67687b59dd-hqql8
	5dfe9d02b1b1a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                11 minutes ago      Running             node-driver-registrar                    0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	470ef449849e9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      12 minutes ago      Running             volume-snapshot-controller               0                   4736a1c095805       snapshot-controller-68b874b76f-m97pd
	90d1724e2a8e9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   12 minutes ago      Running             csi-external-health-monitor-controller   0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	089511c925cdb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      12 minutes ago      Running             volume-snapshot-controller               0                   901b27bd18ec3       snapshot-controller-68b874b76f-zvnk2
	c2e8c85ce8151       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              12 minutes ago      Running             csi-resizer                              0                   754958dc28d19       csi-hostpath-resizer-0
	ba49554ce7e85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             12 minutes ago      Running             csi-attacher                             0                   ef302c090f9a8       csi-hostpath-attacher-0
	78d53c20b85a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   13 minutes ago      Exited              patch                                    0                   4e975a881fa17       ingress-nginx-admission-patch-9xc5z
	8394bba22fffd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   13 minutes ago      Exited              create                                   0                   7cdcf7a057d5a       ingress-nginx-admission-create-fnqjq
	87b37034569df       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              13 minutes ago      Running             registry-proxy                           0                   ab80df45e204e       registry-proxy-2dgr9
	aca5b14e1bc43       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             13 minutes ago      Running             minikube-ingress-dns                     0                   7f285ffa7ac9c       kube-ingress-dns-minikube
	70d635c9d667c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     14 minutes ago      Running             amd-gpu-device-plugin                    0                   3d37e16d91d2b       amd-gpu-device-plugin-g5z6w
	f3766ac202b89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             14 minutes ago      Running             storage-provisioner                      0                   97a7ca87e0fdb       storage-provisioner
	5aadabb8b1bfc       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                                             14 minutes ago      Running             coredns                                  0                   78956e77203cb       coredns-674b8bbfcf-gcxhf
	f10061ba824c0       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                                                             14 minutes ago      Running             kube-proxy                               0                   b60868a950e81       kube-proxy-cm28f
	ccc99095a0e73       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                                                             14 minutes ago      Running             kube-apiserver                           0                   3b49e7f986574       kube-apiserver-addons-301682
	b4d0fe15b4640       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                                                             14 minutes ago      Running             kube-controller-manager                  0                   793d3507bd395       kube-controller-manager-addons-301682
	a117b554832ef       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                                             14 minutes ago      Running             etcd                                     0                   ecf8d198683c7       etcd-addons-301682
	4e556fe1e25cc       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                                                             14 minutes ago      Running             kube-scheduler                           0                   d882c0c670fce       kube-scheduler-addons-301682
	
	
	==> coredns [5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2] <==
	[INFO] 10.244.0.7:32924 - 31817 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00016541s
	[INFO] 10.244.0.7:45555 - 36352 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00018168s
	[INFO] 10.244.0.7:45555 - 34950 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000381728s
	[INFO] 10.244.0.7:45555 - 8323 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000199431s
	[INFO] 10.244.0.7:45555 - 42071 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000099567s
	[INFO] 10.244.0.7:45555 - 64128 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000254407s
	[INFO] 10.244.0.7:45555 - 8605 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000098495s
	[INFO] 10.244.0.7:45555 - 16810 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000187806s
	[INFO] 10.244.0.7:45555 - 46537 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000158169s
	[INFO] 10.244.0.7:48694 - 60790 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000236224s
	[INFO] 10.244.0.7:48694 - 22819 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000391519s
	[INFO] 10.244.0.7:48694 - 52804 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000121386s
	[INFO] 10.244.0.7:48694 - 63575 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000214672s
	[INFO] 10.244.0.7:48694 - 48977 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000158346s
	[INFO] 10.244.0.7:48694 - 31048 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000542668s
	[INFO] 10.244.0.7:48694 - 24 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000182479s
	[INFO] 10.244.0.7:48694 - 60981 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000213382s
	[INFO] 10.244.0.7:36501 - 54224 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000166912s
	[INFO] 10.244.0.7:36501 - 51594 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00010801s
	[INFO] 10.244.0.7:36501 - 1882 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079375s
	[INFO] 10.244.0.7:36501 - 37113 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000092875s
	[INFO] 10.244.0.7:36501 - 58055 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000080768s
	[INFO] 10.244.0.7:36501 - 42927 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.0000662s
	[INFO] 10.244.0.7:36501 - 34210 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000075688s
	[INFO] 10.244.0.7:36501 - 19748 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00008178s
	
	
	==> describe nodes <==
	Name:               addons-301682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-301682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-301682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-301682
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-301682"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-301682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:33:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:19:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-301682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3f7748b45e54c5d95a766f7ac118097
	  System UUID:                c3f7748b-45e5-4c5d-95a7-66f7ac118097
	  Boot ID:                    4dcad91c-eb4d-46c9-ae52-10be6c00fd59
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  ingress-nginx               ingress-nginx-controller-67687b59dd-hqql8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         14m
	  kube-system                 amd-gpu-device-plugin-g5z6w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-674b8bbfcf-gcxhf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-h4qg2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-301682                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-301682                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-301682        200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-cm28f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-301682                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 registry-694bd45846-x8cnn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 registry-proxy-2dgr9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-68b874b76f-m97pd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-68b874b76f-zvnk2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m                kubelet          Node addons-301682 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node addons-301682 event: Registered Node addons-301682 in Controller
	
	
	==> dmesg <==
	[  +3.981178] kauditd_printk_skb: 99 callbacks suppressed
	[ +14.133007] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.888041] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:20] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.101498] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.564016] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.018820] kauditd_printk_skb: 4 callbacks suppressed
	[Jun30 14:22] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.468740] kauditd_printk_skb: 33 callbacks suppressed
	[Jun30 14:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.720029] kauditd_printk_skb: 37 callbacks suppressed
	[Jun30 14:25] kauditd_printk_skb: 33 callbacks suppressed
	[  +3.578772] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.590938] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.177192] kauditd_printk_skb: 20 callbacks suppressed
	[Jun30 14:26] kauditd_printk_skb: 4 callbacks suppressed
	[ +46.460054] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:27] kauditd_printk_skb: 2 callbacks suppressed
	[ +35.275184] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:29] kauditd_printk_skb: 9 callbacks suppressed
	[ +22.041327] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:30] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:31] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb] <==
	{"level":"info","ts":"2025-06-30T14:21:16.254726Z","caller":"traceutil/trace.go:171","msg":"trace[347540210] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"200.343691ms","start":"2025-06-30T14:21:16.054373Z","end":"2025-06-30T14:21:16.254716Z","steps":["trace[347540210] 'agreement among raft nodes before linearized reading'  (duration: 200.191188ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.254998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.889254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:21:16.255051Z","caller":"traceutil/trace.go:171","msg":"trace[2072353184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"259.964064ms","start":"2025-06-30T14:21:15.995079Z","end":"2025-06-30T14:21:16.255043Z","steps":["trace[2072353184] 'agreement among raft nodes before linearized reading'  (duration: 259.892612ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:21:16.256094Z","caller":"traceutil/trace.go:171","msg":"trace[752785918] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"419.629539ms","start":"2025-06-30T14:21:15.836340Z","end":"2025-06-30T14:21:16.255969Z","steps":["trace[752785918] 'process raft request'  (duration: 416.770167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.256259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:21:15.836292Z","time spent":"419.882706ms","remote":"127.0.0.1:55816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1189 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-06-30T14:22:57.074171Z","caller":"traceutil/trace.go:171","msg":"trace[97580462] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"235.032412ms","start":"2025-06-30T14:22:56.839110Z","end":"2025-06-30T14:22:57.074143Z","steps":["trace[97580462] 'process raft request'  (duration: 234.613297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.462692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650406Z","caller":"traceutil/trace.go:171","msg":"trace[1036457483] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"155.081366ms","start":"2025-06-30T14:22:59.495275Z","end":"2025-06-30T14:22:59.650356Z","steps":["trace[1036457483] 'range keys from in-memory index tree'  (duration: 154.411147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:22:59.650586Z","caller":"traceutil/trace.go:171","msg":"trace[806257844] transaction","detail":"{read_only:false; response_revision:1386; number_of_response:1; }","duration":"115.895314ms","start":"2025-06-30T14:22:59.534680Z","end":"2025-06-30T14:22:59.650576Z","steps":["trace[806257844] 'process raft request'  (duration: 113.707335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"485.393683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650888Z","caller":"traceutil/trace.go:171","msg":"trace[707366630] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1385; }","duration":"486.585604ms","start":"2025-06-30T14:22:59.164295Z","end":"2025-06-30T14:22:59.650881Z","steps":["trace[707366630] 'range keys from in-memory index tree'  (duration: 485.334873ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.650922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.164282Z","time spent":"486.621786ms","remote":"127.0.0.1:55612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.09899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651010Z","caller":"traceutil/trace.go:171","msg":"trace[926388769] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"375.285797ms","start":"2025-06-30T14:22:59.275719Z","end":"2025-06-30T14:22:59.651005Z","steps":["trace[926388769] 'range keys from in-memory index tree'  (duration: 374.055569ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.275706Z","time spent":"375.316283ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.573265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651095Z","caller":"traceutil/trace.go:171","msg":"trace[444156936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"374.826279ms","start":"2025-06-30T14:22:59.276264Z","end":"2025-06-30T14:22:59.651090Z","steps":["trace[444156936] 'range keys from in-memory index tree'  (duration: 373.54342ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.276255Z","time spent":"374.850773ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.221471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651162Z","caller":"traceutil/trace.go:171","msg":"trace[72079455] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1385; }","duration":"136.411789ms","start":"2025-06-30T14:22:59.514744Z","end":"2025-06-30T14:22:59.651156Z","steps":["trace[72079455] 'range keys from in-memory index tree'  (duration: 135.196228ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:25:50.156282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.241875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-06-30T14:25:50.156408Z","caller":"traceutil/trace.go:171","msg":"trace[1656189336] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1889; }","duration":"105.429353ms","start":"2025-06-30T14:25:50.050958Z","end":"2025-06-30T14:25:50.156387Z","steps":["trace[1656189336] 'range keys from in-memory index tree'  (duration: 105.167742ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:28:59.297152Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1538}
	{"level":"info","ts":"2025-06-30T14:28:59.333481Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1538,"took":"35.184312ms","hash":3459685430,"current-db-size-bytes":7704576,"current-db-size":"7.7 MB","current-db-size-in-use-bytes":4759552,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2025-06-30T14:28:59.333691Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3459685430,"revision":1538,"compact-revision":-1}
	
	
	==> kernel <==
	 14:33:43 up 15 min,  0 users,  load average: 0.16, 0.42, 0.50
	Linux addons-301682 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a] <==
	I0630 14:20:17.020266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0630 14:20:17.020272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0630 14:20:30.566598       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.249.255:443: connect: connection refused" logger="UnhandledError"
	W0630 14:20:30.568692       1 handler_proxy.go:99] no RequestInfo found in the context
	E0630 14:20:30.568788       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0630 14:20:30.592794       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0630 14:20:30.602722       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0630 14:25:32.039384       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43658: use of closed network connection
	E0630 14:25:32.235328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43690: use of closed network connection
	I0630 14:25:35.327796       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:40.911437       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:25:41.137079       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.71.181"}
	I0630 14:25:41.142822       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:41.721263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.215.125"}
	I0630 14:25:47.346218       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:26:31.606219       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:27:03.338971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:27:51.135976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:00.946999       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:59.400677       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:31:51.314031       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0630 14:31:52.350724       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0] <==
	E0630 14:28:34.621606       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:35.919826       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:38.493394       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:43.650839       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:53.905832       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:29:12.559067       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I0630 14:29:58.729598       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0630 14:31:52.352989       1 reflector.go:200] "Failed to watch" err="the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:31:53.610998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:31:55.274424       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:32:00.912894       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0630 14:32:01.568174       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0630 14:32:08.180369       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0630 14:32:08.181192       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:32:08.623346       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0630 14:32:08.623448       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0630 14:32:12.901188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:32:22.822130       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:32:30.243428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:32:37.822383       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:32:52.823284       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:33:03.875501       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:33:07.823610       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:33:22.824630       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:33:37.825423       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b] <==
	E0630 14:19:09.616075       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:19:09.628197       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0630 14:19:09.628280       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:19:09.728584       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:19:09.728641       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:19:09.728663       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:19:09.760004       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:19:09.760419       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:19:09.760431       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:19:09.761800       1 config.go:199] "Starting service config controller"
	I0630 14:19:09.761820       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:19:09.764743       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:19:09.764796       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:19:09.764830       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:19:09.764834       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:19:09.770113       1 config.go:329] "Starting node config controller"
	I0630 14:19:09.770142       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:19:09.862889       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:19:09.865227       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:19:09.865265       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:19:09.870697       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627] <==
	E0630 14:19:00.996185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:19:00.996326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:19:00.996316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:00.996403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:00.996618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:00.998826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:00.999006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:01.002700       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.002834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:01.865362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:01.884714       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.908759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:01.937379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:19:01.938367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:01.983087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:19:02.032891       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:19:02.058487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:19:02.131893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:02.191157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:02.310584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:02.326588       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:02.381605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 14:19:04.769814       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 14:32:55 addons-301682 kubelet[1543]: I0630 14:32:55.702250    1543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a34f42b-1eef-466f-b532-3d9e38465b9f" path="/var/lib/kubelet/pods/7a34f42b-1eef-466f-b532-3d9e38465b9f/volumes"
	Jun 30 14:33:01 addons-301682 kubelet[1543]: E0630 14:33:01.699894    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a7647f82-c5fc-422d-8b99-fe25edb95f59"
	Jun 30 14:33:04 addons-301682 kubelet[1543]: E0630 14:33:04.161228    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293984160928978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:04 addons-301682 kubelet[1543]: E0630 14:33:04.161273    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293984160928978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:04 addons-301682 kubelet[1543]: I0630 14:33:04.696028    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:33:04 addons-301682 kubelet[1543]: E0630 14:33:04.697616    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:33:14 addons-301682 kubelet[1543]: E0630 14:33:14.164126    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293994163736824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:14 addons-301682 kubelet[1543]: E0630 14:33:14.164172    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293994163736824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:16 addons-301682 kubelet[1543]: E0630 14:33:16.700620    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a7647f82-c5fc-422d-8b99-fe25edb95f59"
	Jun 30 14:33:19 addons-301682 kubelet[1543]: I0630 14:33:19.697007    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:33:19 addons-301682 kubelet[1543]: E0630 14:33:19.698142    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:33:21 addons-301682 kubelet[1543]: I0630 14:33:21.695742    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:33:24 addons-301682 kubelet[1543]: E0630 14:33:24.168349    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294004167600795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:24 addons-301682 kubelet[1543]: E0630 14:33:24.168392    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294004167600795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:30 addons-301682 kubelet[1543]: I0630 14:33:30.695706    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-g5z6w" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:33:30 addons-301682 kubelet[1543]: E0630 14:33:30.697384    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a7647f82-c5fc-422d-8b99-fe25edb95f59"
	Jun 30 14:33:31 addons-301682 kubelet[1543]: I0630 14:33:31.702854    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:33:31 addons-301682 kubelet[1543]: E0630 14:33:31.705141    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:33:31 addons-301682 kubelet[1543]: E0630 14:33:31.872836    1543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:33:31 addons-301682 kubelet[1543]: E0630 14:33:31.872945    1543 kuberuntime_image.go:42] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:33:31 addons-301682 kubelet[1543]: E0630 14:33:31.873236    1543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcnmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(32226795-7a22-4935-b60c-8553d2716e86): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:33:31 addons-301682 kubelet[1543]: E0630 14:33:31.875045    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="32226795-7a22-4935-b60c-8553d2716e86"
	Jun 30 14:33:34 addons-301682 kubelet[1543]: E0630 14:33:34.171142    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294014170298745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:34 addons-301682 kubelet[1543]: E0630 14:33:34.171483    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294014170298745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:33:39 addons-301682 kubelet[1543]: I0630 14:33:39.697710    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2dgr9" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4] <==
	W0630 14:33:18.549582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:20.553639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:20.559484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:22.563301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:22.569366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:24.572479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:24.579772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:26.583931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:26.592416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:28.595278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:28.602808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:30.606462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:30.613694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:32.617251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:32.622723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:34.626216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:34.633597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:36.637364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:36.642469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:38.645482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:38.651140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:40.654951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:40.660419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:42.663066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:42.671071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
helpers_test.go:261: (dbg) Run:  kubectl --context addons-301682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn: exit status 1 (89.887211ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:25:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9gdz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9gdz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-301682
	  Warning  Failed     6m54s                 kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m55s (x4 over 8m3s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     83s (x4 over 6m54s)   kubelet            Error: ErrImagePull
	  Warning  Failed     83s (x3 over 5m45s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    14s (x9 over 6m53s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     14s (x9 over 6m53s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:30:11 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcnmb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-jcnmb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  3m33s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-301682
	  Normal   BackOff    2m25s                  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m25s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m13s (x2 over 3m32s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     13s (x2 over 2m26s)    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     13s (x2 over 2m26s)    kubelet            Error: ErrImagePull
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l844 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6l844:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fnqjq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9xc5z" not found
	Error from server (NotFound): pods "registry-694bd45846-x8cnn" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 addons disable ingress-dns --alsologtostderr -v=1: (1.191774759s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 addons disable ingress --alsologtostderr -v=1: (7.772338484s)
--- FAIL: TestAddons/parallel/Ingress (492.47s)
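
The image-pull failures above share one root cause, stated verbatim in the kubelet log and pod events: unauthenticated pulls from docker.io are hitting Docker Hub's pull rate limit ("toomanyrequests"). A minimal mitigation sketch, assuming the CI host itself can pull the images (with credentials or via a mirror); the profile name and image tag are taken from this report, and `minikube image load` side-loads an image into the node so the in-cluster pull can be skipped:

	# pull on the host, where authenticated credentials or a mirror apply
	docker pull docker.io/nginx:alpine
	# side-load the image into the addons-301682 node's CRI image store
	out/minikube-linux-amd64 -p addons-301682 image load docker.io/nginx:alpine

This is illustrative rather than a complete fix: it only helps pods whose imagePullPolicy is not Always (the task-pv-container above pins ImagePullPolicy:Always, and the registry pod references its image by digest, which tag-based preloading does not satisfy).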

TestAddons/parallel/CSI (376.58s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0630 14:29:59.473052 1557732 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0630 14:29:59.479685 1557732 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0630 14:29:59.479729 1557732 kapi.go:107] duration metric: took 6.702717ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.718946ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-301682 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-301682 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [32226795-7a22-4935-b60c-8553d2716e86] Pending
helpers_test.go:344: "task-pv-pod" [32226795-7a22-4935-b60c-8553d2716e86] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-06-30 14:36:12.096927094 +0000 UTC m=+1115.184807896
addons_test.go:567: (dbg) Run:  kubectl --context addons-301682 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-301682 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-301682/192.168.39.227
Start Time:       Mon, 30 Jun 2025 14:30:11 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcnmb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-jcnmb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m1s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-301682
Warning  Failed     2m41s (x2 over 4m54s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     84s (x3 over 4m54s)    kubelet            Error: ErrImagePull
Warning  Failed     84s                    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    46s (x5 over 4m53s)    kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     46s (x5 over 4m53s)    kubelet            Error: ImagePullBackOff
Normal   Pulling    32s (x4 over 6m)       kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-301682 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-301682 logs task-pv-pod -n default: exit status 1 (76.245335ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:567: kubectl --context addons-301682 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-301682 -n addons-301682
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 logs -n 25: (1.544145377s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | -p download-only-781147              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | --download-only -p                   | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | binary-mirror-095233                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44619               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-095233              | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| addons  | disable dashboard -p                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| start   | -p addons-301682 --wait=true         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:25 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:29 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | configure registry-creds -f          | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | ./testdata/addons_testconfig.json    |                      |         |         |                     |                     |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | disable registry-creds               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:32 UTC | 30 Jun 25 14:33 UTC |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:33 UTC | 30 Jun 25 14:33 UTC |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:33 UTC | 30 Jun 25 14:33 UTC |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:18:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:18:18.914659 1558425 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:18:18.914940 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.914950 1558425 out.go:358] Setting ErrFile to fd 2...
	I0630 14:18:18.914954 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.915163 1558425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:18:18.915795 1558425 out.go:352] Setting JSON to false
	I0630 14:18:18.916730 1558425 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":28791,"bootTime":1751264308,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:18:18.916865 1558425 start.go:140] virtualization: kvm guest
	I0630 14:18:18.918804 1558425 out.go:177] * [addons-301682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:18:18.920591 1558425 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:18:18.920596 1558425 notify.go:220] Checking for updates...
	I0630 14:18:18.923430 1558425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:18:18.924993 1558425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:18:18.926449 1558425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:18.927916 1558425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:18:18.929158 1558425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:18:18.930609 1558425 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:18:18.965828 1558425 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:18:18.967229 1558425 start.go:304] selected driver: kvm2
	I0630 14:18:18.967249 1558425 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:18:18.967260 1558425 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:18:18.968055 1558425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.968161 1558425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:18:18.984884 1558425 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:18:18.984967 1558425 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:18:18.985269 1558425 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:18:18.985311 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:18.985360 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:18.985373 1558425 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:18:18.985492 1558425 start.go:347] cluster config:
	{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:18.985616 1558425 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.987784 1558425 out.go:177] * Starting "addons-301682" primary control-plane node in "addons-301682" cluster
	I0630 14:18:18.989175 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:18.989236 1558425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 14:18:18.989252 1558425 cache.go:56] Caching tarball of preloaded images
	I0630 14:18:18.989351 1558425 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 14:18:18.989366 1558425 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
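
The preload check above amounts to an existence test on the cached lz4 tarball before falling back to a download. A minimal sketch of that test, assuming a hypothetical cachedPreloadPath helper that mirrors the path layout in the log; minikube's real check also verifies a checksum:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cachedPreloadPath is a hypothetical helper mirroring the cache layout
// seen in the log above (cache/preloaded-tarball/...tar.lz4).
func cachedPreloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := cachedPreloadPath(os.Getenv("MINIKUBE_HOME"), "v1.33.2")
	if info, err := os.Stat(p); err == nil && info.Size() > 0 {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}
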
	I0630 14:18:18.989808 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:18.989840 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json: {Name:mk0b97369f17da476cd2a8393ae45d3ce84c94a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
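
The two lines above persist the full cluster config under a file lock. A minimal sketch of the save step, assuming a trimmed-down stand-in struct (the real config carries every field shown in the dump above) and omitting the locking:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ClusterConfig is a trimmed-down stand-in for minikube's cluster config.
type ClusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
}

func saveProfile(home string, cfg ClusterConfig) error {
	dir := filepath.Join(home, "profiles", cfg.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	// The real code acquires the WriteFile lock shown above first.
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cfg := ClusterConfig{Name: "addons-301682", Driver: "kvm2", Memory: 4096,
		CPUs: 2, KubernetesVersion: "v1.33.2", ContainerRuntime: "crio"}
	if err := saveProfile(os.Getenv("MINIKUBE_HOME"), cfg); err != nil {
		panic(err)
	}
}
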
	I0630 14:18:18.990016 1558425 start.go:360] acquireMachinesLock for addons-301682: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:18:18.990075 1558425 start.go:364] duration metric: took 40.808µs to acquireMachinesLock for "addons-301682"
	I0630 14:18:18.990091 1558425 start.go:93] Provisioning new machine with config: &{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:18:18.990156 1558425 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:18:18.992039 1558425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:18:18.992210 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:18:18.992268 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:18:19.009360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0630 14:18:19.009944 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:18:19.010513 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:18:19.010538 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:18:19.010965 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:18:19.011233 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:19.011437 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:19.011652 1558425 start.go:159] libmachine.API.Create for "addons-301682" (driver="kvm2")
	I0630 14:18:19.011686 1558425 client.go:168] LocalClient.Create starting
	I0630 14:18:19.011737 1558425 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 14:18:19.156936 1558425 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 14:18:19.413430 1558425 main.go:141] libmachine: Running pre-create checks...
	I0630 14:18:19.413459 1558425 main.go:141] libmachine: (addons-301682) Calling .PreCreateCheck
	I0630 14:18:19.414009 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:19.414492 1558425 main.go:141] libmachine: Creating machine...
	I0630 14:18:19.414509 1558425 main.go:141] libmachine: (addons-301682) Calling .Create
	I0630 14:18:19.414658 1558425 main.go:141] libmachine: (addons-301682) creating KVM machine...
	I0630 14:18:19.414680 1558425 main.go:141] libmachine: (addons-301682) creating network...
	I0630 14:18:19.416107 1558425 main.go:141] libmachine: (addons-301682) DBG | found existing default KVM network
	I0630 14:18:19.416967 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.416813 1558447 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0630 14:18:19.417027 1558425 main.go:141] libmachine: (addons-301682) DBG | created network xml: 
	I0630 14:18:19.417047 1558425 main.go:141] libmachine: (addons-301682) DBG | <network>
	I0630 14:18:19.417058 1558425 main.go:141] libmachine: (addons-301682) DBG |   <name>mk-addons-301682</name>
	I0630 14:18:19.417065 1558425 main.go:141] libmachine: (addons-301682) DBG |   <dns enable='no'/>
	I0630 14:18:19.417074 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417083 1558425 main.go:141] libmachine: (addons-301682) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:18:19.417095 1558425 main.go:141] libmachine: (addons-301682) DBG |     <dhcp>
	I0630 14:18:19.417105 1558425 main.go:141] libmachine: (addons-301682) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:18:19.417114 1558425 main.go:141] libmachine: (addons-301682) DBG |     </dhcp>
	I0630 14:18:19.417134 1558425 main.go:141] libmachine: (addons-301682) DBG |   </ip>
	I0630 14:18:19.417161 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417196 1558425 main.go:141] libmachine: (addons-301682) DBG | </network>
	I0630 14:18:19.417211 1558425 main.go:141] libmachine: (addons-301682) DBG | 
	I0630 14:18:19.422966 1558425 main.go:141] libmachine: (addons-301682) DBG | trying to create private KVM network mk-addons-301682 192.168.39.0/24...
	I0630 14:18:19.504039 1558425 main.go:141] libmachine: (addons-301682) DBG | private KVM network mk-addons-301682 192.168.39.0/24 created
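
The private network is created by handing the XML printed above to libvirt. A minimal sketch of the equivalent calls through the libvirt Go bindings (libvirt.org/go/libvirt), assuming libvirtd is reachable at qemu:///system; the actual driver performs this inside the docker-machine-driver-kvm2 plugin:

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-301682</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent network object from XML, then start it.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		panic(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		panic(err)
	}
	fmt.Println("private KVM network mk-addons-301682 created")
}
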
	I0630 14:18:19.504091 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.503994 1558447 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.504105 1558425 main.go:141] libmachine: (addons-301682) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.504121 1558425 main.go:141] libmachine: (addons-301682) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:18:19.504170 1558425 main.go:141] libmachine: (addons-301682) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:18:19.852642 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.852518 1558447 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa...
	I0630 14:18:19.994685 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994513 1558447 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk...
	I0630 14:18:19.994718 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing magic tar header
	I0630 14:18:19.994732 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing SSH key tar header
	I0630 14:18:19.994739 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994653 1558447 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.994842 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682
	I0630 14:18:19.994876 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 14:18:19.994890 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 (perms=drwx------)
	I0630 14:18:19.994904 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:18:19.994914 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 14:18:19.994928 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 14:18:19.994937 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:18:19.994950 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:18:19.994964 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.994974 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:19.994989 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 14:18:19.994999 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:18:19.995008 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins
	I0630 14:18:19.995017 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home
	I0630 14:18:19.995028 1558425 main.go:141] libmachine: (addons-301682) DBG | skipping /home - not owner
	I0630 14:18:19.996388 1558425 main.go:141] libmachine: (addons-301682) define libvirt domain using xml: 
	I0630 14:18:19.996417 1558425 main.go:141] libmachine: (addons-301682) <domain type='kvm'>
	I0630 14:18:19.996424 1558425 main.go:141] libmachine: (addons-301682)   <name>addons-301682</name>
	I0630 14:18:19.996429 1558425 main.go:141] libmachine: (addons-301682)   <memory unit='MiB'>4096</memory>
	I0630 14:18:19.996434 1558425 main.go:141] libmachine: (addons-301682)   <vcpu>2</vcpu>
	I0630 14:18:19.996437 1558425 main.go:141] libmachine: (addons-301682)   <features>
	I0630 14:18:19.996441 1558425 main.go:141] libmachine: (addons-301682)     <acpi/>
	I0630 14:18:19.996445 1558425 main.go:141] libmachine: (addons-301682)     <apic/>
	I0630 14:18:19.996450 1558425 main.go:141] libmachine: (addons-301682)     <pae/>
	I0630 14:18:19.996454 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996496 1558425 main.go:141] libmachine: (addons-301682)   </features>
	I0630 14:18:19.996523 1558425 main.go:141] libmachine: (addons-301682)   <cpu mode='host-passthrough'>
	I0630 14:18:19.996559 1558425 main.go:141] libmachine: (addons-301682)   
	I0630 14:18:19.996579 1558425 main.go:141] libmachine: (addons-301682)   </cpu>
	I0630 14:18:19.996596 1558425 main.go:141] libmachine: (addons-301682)   <os>
	I0630 14:18:19.996607 1558425 main.go:141] libmachine: (addons-301682)     <type>hvm</type>
	I0630 14:18:19.996615 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='cdrom'/>
	I0630 14:18:19.996623 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='hd'/>
	I0630 14:18:19.996628 1558425 main.go:141] libmachine: (addons-301682)     <bootmenu enable='no'/>
	I0630 14:18:19.996634 1558425 main.go:141] libmachine: (addons-301682)   </os>
	I0630 14:18:19.996639 1558425 main.go:141] libmachine: (addons-301682)   <devices>
	I0630 14:18:19.996646 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='cdrom'>
	I0630 14:18:19.996654 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/boot2docker.iso'/>
	I0630 14:18:19.996661 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hdc' bus='scsi'/>
	I0630 14:18:19.996666 1558425 main.go:141] libmachine: (addons-301682)       <readonly/>
	I0630 14:18:19.996672 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996677 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='disk'>
	I0630 14:18:19.996687 1558425 main.go:141] libmachine: (addons-301682)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:18:19.996710 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk'/>
	I0630 14:18:19.996729 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hda' bus='virtio'/>
	I0630 14:18:19.996742 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996753 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996766 1558425 main.go:141] libmachine: (addons-301682)       <source network='mk-addons-301682'/>
	I0630 14:18:19.996777 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996786 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996796 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996808 1558425 main.go:141] libmachine: (addons-301682)       <source network='default'/>
	I0630 14:18:19.996821 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996847 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996868 1558425 main.go:141] libmachine: (addons-301682)     <serial type='pty'>
	I0630 14:18:19.996884 1558425 main.go:141] libmachine: (addons-301682)       <target port='0'/>
	I0630 14:18:19.996899 1558425 main.go:141] libmachine: (addons-301682)     </serial>
	I0630 14:18:19.996909 1558425 main.go:141] libmachine: (addons-301682)     <console type='pty'>
	I0630 14:18:19.996918 1558425 main.go:141] libmachine: (addons-301682)       <target type='serial' port='0'/>
	I0630 14:18:19.996928 1558425 main.go:141] libmachine: (addons-301682)     </console>
	I0630 14:18:19.996938 1558425 main.go:141] libmachine: (addons-301682)     <rng model='virtio'>
	I0630 14:18:19.996951 1558425 main.go:141] libmachine: (addons-301682)       <backend model='random'>/dev/random</backend>
	I0630 14:18:19.996962 1558425 main.go:141] libmachine: (addons-301682)     </rng>
	I0630 14:18:19.996969 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996980 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996990 1558425 main.go:141] libmachine: (addons-301682)   </devices>
	I0630 14:18:19.997056 1558425 main.go:141] libmachine: (addons-301682) </domain>
	I0630 14:18:19.997083 1558425 main.go:141] libmachine: (addons-301682) 
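
Defining and booting the guest follows the same pattern: the domain XML above is handed to libvirt, then the defined domain is started (the second "creating domain..." line). A minimal sketch with the libvirt Go bindings, assuming the XML has been written to a local file:

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Assume the domain XML printed in the log was saved to this file.
	domainXML, err := os.ReadFile("addons-301682.xml")
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain addons-301682 started")
}
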
	I0630 14:18:20.002436 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:4a:da:84 in network default
	I0630 14:18:20.002966 1558425 main.go:141] libmachine: (addons-301682) starting domain...
	I0630 14:18:20.002981 1558425 main.go:141] libmachine: (addons-301682) ensuring networks are active...
	I0630 14:18:20.002988 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:20.003928 1558425 main.go:141] libmachine: (addons-301682) Ensuring network default is active
	I0630 14:18:20.004377 1558425 main.go:141] libmachine: (addons-301682) Ensuring network mk-addons-301682 is active
	I0630 14:18:20.004924 1558425 main.go:141] libmachine: (addons-301682) getting domain XML...
	I0630 14:18:20.006331 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:21.490289 1558425 main.go:141] libmachine: (addons-301682) waiting for IP...
	I0630 14:18:21.491154 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.491628 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.491677 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.491627 1558447 retry.go:31] will retry after 227.981696ms: waiting for domain to come up
	I0630 14:18:21.721263 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.721780 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.721803 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.721737 1558447 retry.go:31] will retry after 379.046975ms: waiting for domain to come up
	I0630 14:18:22.102468 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.102921 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.102946 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.102870 1558447 retry.go:31] will retry after 342.349164ms: waiting for domain to come up
	I0630 14:18:22.446573 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.446984 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.447028 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.446972 1558447 retry.go:31] will retry after 471.24813ms: waiting for domain to come up
	I0630 14:18:22.920211 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.920789 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.920882 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.920792 1558447 retry.go:31] will retry after 708.674729ms: waiting for domain to come up
	I0630 14:18:23.631552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:23.632140 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:23.632158 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:23.632083 1558447 retry.go:31] will retry after 832.667186ms: waiting for domain to come up
	I0630 14:18:24.466597 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:24.467128 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:24.467188 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:24.467084 1558447 retry.go:31] will retry after 1.046318752s: waiting for domain to come up
	I0630 14:18:25.514952 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:25.515439 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:25.515467 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:25.515417 1558447 retry.go:31] will retry after 1.194063503s: waiting for domain to come up
	I0630 14:18:26.712109 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:26.712668 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:26.712736 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:26.712627 1558447 retry.go:31] will retry after 1.248422127s: waiting for domain to come up
	I0630 14:18:27.962423 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:27.962871 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:27.962904 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:27.962823 1558447 retry.go:31] will retry after 2.035519816s: waiting for domain to come up
	I0630 14:18:29.999626 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:30.000023 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:30.000122 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:30.000029 1558447 retry.go:31] will retry after 2.163487066s: waiting for domain to come up
	I0630 14:18:32.164834 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:32.165260 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:32.165289 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:32.165193 1558447 retry.go:31] will retry after 2.715279658s: waiting for domain to come up
	I0630 14:18:34.882095 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:34.882613 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:34.882651 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:34.882566 1558447 retry.go:31] will retry after 4.101409574s: waiting for domain to come up
	I0630 14:18:38.986670 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:38.987057 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:38.987115 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:38.987021 1558447 retry.go:31] will retry after 4.770477957s: waiting for domain to come up
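
The loop above polls for a DHCP lease with growing, jittered delays. A minimal sketch of that retry pattern, assuming a hypothetical lookupIP callback that returns an empty string until the lease appears:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with jittered, roughly exponential backoff,
// mirroring the retry.go lines in the log above.
func waitForIP(lookupIP func() string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip := lookupIP(); ip != "" {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	n := 0
	ip, err := waitForIP(func() string {
		// Hypothetical stand-in: the real code parses the host's DHCP
		// leases for the domain's MAC address.
		if n++; n > 5 {
			return "192.168.39.227"
		}
		return ""
	}, time.Minute)
	fmt.Println(ip, err)
}
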
	I0630 14:18:43.763775 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764289 1558425 main.go:141] libmachine: (addons-301682) found domain IP: 192.168.39.227
	I0630 14:18:43.764317 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has current primary IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764323 1558425 main.go:141] libmachine: (addons-301682) reserving static IP address...
	I0630 14:18:43.764708 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find host DHCP lease matching {name: "addons-301682", mac: "52:54:00:83:16:36", ip: "192.168.39.227"} in network mk-addons-301682
	I0630 14:18:43.852639 1558425 main.go:141] libmachine: (addons-301682) reserved static IP address 192.168.39.227 for domain addons-301682
	I0630 14:18:43.852672 1558425 main.go:141] libmachine: (addons-301682) DBG | Getting to WaitForSSH function...
	I0630 14:18:43.852679 1558425 main.go:141] libmachine: (addons-301682) waiting for SSH...
	I0630 14:18:43.855466 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855863 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.855913 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855970 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH client type: external
	I0630 14:18:43.856034 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa (-rw-------)
	I0630 14:18:43.856089 1558425 main.go:141] libmachine: (addons-301682) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:18:43.856119 1558425 main.go:141] libmachine: (addons-301682) DBG | About to run SSH command:
	I0630 14:18:43.856137 1558425 main.go:141] libmachine: (addons-301682) DBG | exit 0
	I0630 14:18:43.981627 1558425 main.go:141] libmachine: (addons-301682) DBG | SSH cmd err, output: <nil>: 
	I0630 14:18:43.981928 1558425 main.go:141] libmachine: (addons-301682) KVM machine creation complete
	I0630 14:18:43.982338 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:43.982966 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983226 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983462 1558425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:18:43.983477 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:18:43.984862 1558425 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:18:43.984878 1558425 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:18:43.984885 1558425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:18:43.984892 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:43.987532 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.987932 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.987959 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.988068 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:43.988288 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988434 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988572 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:43.988711 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:43.988940 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:43.988950 1558425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:18:44.093060 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
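
Readiness is established by running `exit 0` over SSH, as in the two probes above. A minimal sketch with golang.org/x/crypto/ssh, using a representative key path; the log shows minikube also shelling out to an external ssh binary for the first probe:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	// "exit 0" succeeds iff the remote shell is reachable and usable.
	return session.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.39.227:22", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/addons-301682/id_rsa"))
	fmt.Println("SSH ready:", err == nil)
}
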
	I0630 14:18:44.093094 1558425 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:18:44.093103 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.096339 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096697 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.096721 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096934 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.097182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097449 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097610 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.097843 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.098060 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.098080 1558425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:18:44.202824 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:18:44.202946 1558425 main.go:141] libmachine: found compatible host: buildroot
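
Provisioner detection reduces to reading ID= from the /etc/os-release output above. A minimal sketch of that parse, assuming the file content has already been fetched over SSH:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=value pairs from /etc/os-release content,
// stripping optional quotes, e.g. ID=buildroot from the output above.
func parseOSRelease(content string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	content := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
	if parseOSRelease(content)["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
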
	I0630 14:18:44.202959 1558425 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:18:44.202967 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203257 1558425 buildroot.go:166] provisioning hostname "addons-301682"
	I0630 14:18:44.203283 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.206655 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.206965 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.206989 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.207261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.207476 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207654 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207765 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.207928 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.208172 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.208189 1558425 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-301682 && echo "addons-301682" | sudo tee /etc/hostname
	I0630 14:18:44.326076 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-301682
	
	I0630 14:18:44.326120 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.329781 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330236 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.330271 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330493 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.330780 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331000 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331147 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.331319 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.331561 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.331583 1558425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-301682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-301682/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-301682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:18:44.442815 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.442853 1558425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 14:18:44.442872 1558425 buildroot.go:174] setting up certificates
	I0630 14:18:44.442886 1558425 provision.go:84] configureAuth start
	I0630 14:18:44.442963 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.443427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:44.446591 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447120 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.447146 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447411 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.449967 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450292 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.450314 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450474 1558425 provision.go:143] copyHostCerts
	I0630 14:18:44.450577 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 14:18:44.450730 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 14:18:44.450832 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 14:18:44.450922 1558425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.addons-301682 san=[127.0.0.1 192.168.39.227 addons-301682 localhost minikube]
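
The server certificate is generated with exactly the SANs listed above (loopback, the VM IP, the hostname, and minikube aliases). A minimal sketch of SAN-bearing certificate creation with crypto/x509, signed here by a throwaway CA rather than minikube's persisted ca.pem, with error handling elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real code loads ca.pem/ca-key.pem from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-301682"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-301682", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
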
	I0630 14:18:44.669777 1558425 provision.go:177] copyRemoteCerts
	I0630 14:18:44.669866 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:18:44.669906 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.673124 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673495 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.673530 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673760 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.674080 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.674291 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.674517 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:44.758379 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:18:44.788885 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:18:44.817666 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:18:44.847039 1558425 provision.go:87] duration metric: took 404.122435ms to configureAuth
	I0630 14:18:44.847076 1558425 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:18:44.847582 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:18:44.847720 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.850359 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.850971 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.850998 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.851240 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.851500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851706 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851871 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.852084 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.852306 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.852322 1558425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 14:18:45.094141 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
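The block above shows minikube writing a systemd environment drop-in over SSH so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarting the service. Below is a minimal, hypothetical Go sketch of running such a remote provisioning command with golang.org/x/crypto/ssh; the key path is a placeholder, and this is a sketch of the idea, not minikube's actual sshutil implementation.

// Hypothetical sketch: run a remote provisioning command over SSH,
// mirroring the CRIO_MINIKUBE_OPTIONS step above. The key path is a
// placeholder, not a value taken from this run.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.227:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("output: %s err: %v\n", out, err)
}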
	I0630 14:18:45.094172 1558425 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:18:45.094182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetURL
	I0630 14:18:45.095525 1558425 main.go:141] libmachine: (addons-301682) DBG | using libvirt version 6000000
	I0630 14:18:45.097995 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098457 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.098484 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098973 1558425 main.go:141] libmachine: Docker is up and running!
	I0630 14:18:45.098988 1558425 main.go:141] libmachine: Reticulating splines...
	I0630 14:18:45.098996 1558425 client.go:171] duration metric: took 26.087298039s to LocalClient.Create
	I0630 14:18:45.099029 1558425 start.go:167] duration metric: took 26.087375233s to libmachine.API.Create "addons-301682"
	I0630 14:18:45.099043 1558425 start.go:293] postStartSetup for "addons-301682" (driver="kvm2")
	I0630 14:18:45.099058 1558425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:18:45.099080 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.099385 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:18:45.099417 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.103070 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103476 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.103519 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.103974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.104154 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.104348 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.190062 1558425 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:18:45.194479 1558425 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:18:45.194513 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 14:18:45.194584 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 14:18:45.194617 1558425 start.go:296] duration metric: took 95.564885ms for postStartSetup
	I0630 14:18:45.194655 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:45.195269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.198414 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.198916 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.198937 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.199225 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:45.199414 1558425 start.go:128] duration metric: took 26.209245344s to createHost
	I0630 14:18:45.199439 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.202677 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203657 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.203683 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203917 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.204167 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204389 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204594 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.204750 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:45.204952 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:45.204962 1558425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:18:45.310482 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751293125.283428942
	
	I0630 14:18:45.310513 1558425 fix.go:216] guest clock: 1751293125.283428942
	I0630 14:18:45.310540 1558425 fix.go:229] Guest: 2025-06-30 14:18:45.283428942 +0000 UTC Remote: 2025-06-30 14:18:45.199427216 +0000 UTC m=+26.326566099 (delta=84.001726ms)
	I0630 14:18:45.310570 1558425 fix.go:200] guest clock delta is within tolerance: 84.001726ms
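The `date +%s.%N` exchange above is how guest/host clock skew is measured: the guest's epoch timestamp is parsed and diffed against the host clock, and the delta (84.001726ms here) is checked against a tolerance. A small sketch of that check; the tolerance value below is assumed, not taken from fix.go.

// Hypothetical sketch of the guest-clock tolerance check seen above:
// parse the guest's `date +%s.%N` output, diff it against the host
// clock, and flag skew beyond a tolerance (threshold assumed).
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1751293125.283428942" // from `date +%s.%N` on the guest
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold, not from the log
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock delta %v exceeds tolerance; clock sync needed\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}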
	I0630 14:18:45.310578 1558425 start.go:83] releasing machines lock for "addons-301682", held for 26.320495243s
	I0630 14:18:45.310656 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.310928 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.313785 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314207 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.314241 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314506 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315123 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315340 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315461 1558425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:18:45.315505 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.315646 1558425 ssh_runner.go:195] Run: cat /version.json
	I0630 14:18:45.315683 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.318925 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319155 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319563 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319594 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319617 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319643 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319788 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.319877 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.320031 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320110 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320304 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320317 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320442 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.320501 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.399981 1558425 ssh_runner.go:195] Run: systemctl --version
	I0630 14:18:45.435607 1558425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 14:18:45.595593 1558425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:18:45.602291 1558425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:18:45.602374 1558425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:18:45.622229 1558425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:18:45.622263 1558425 start.go:495] detecting cgroup driver to use...
	I0630 14:18:45.622333 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 14:18:45.641226 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 14:18:45.658995 1558425 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:18:45.659074 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:18:45.675308 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:18:45.691780 1558425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:18:45.844773 1558425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:18:46.002067 1558425 docker.go:246] disabling docker service ...
	I0630 14:18:46.002163 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:18:46.018486 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:18:46.032711 1558425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:18:46.215507 1558425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:18:46.345437 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:18:46.361241 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:18:46.382182 1558425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 14:18:46.382265 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.393781 1558425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 14:18:46.393858 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.404879 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.415753 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.427101 1558425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:18:46.439585 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.450640 1558425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.469657 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
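The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, resets conmon_cgroup, and seeds default_sysctls. A hedged Go sketch of the same idea using multiline regexps instead of sed (only the first two rewrites shown):

// Hypothetical sketch of the crio.conf rewrites done with sed above:
// load the drop-in, swap the pause_image and cgroup_manager lines with
// multiline regexps, and write it back. Path and values mirror the log.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}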
	I0630 14:18:46.480995 1558425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:18:46.490960 1558425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:18:46.491038 1558425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:18:46.506162 1558425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
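When the bridge-nf-call-iptables sysctl file is missing, the tool falls back to loading br_netfilter and then forces IPv4 forwarding on, as the two commands above show. An illustrative, root-only Go equivalent:

// Hypothetical sketch of the netfilter prep above: load br_netfilter if
// the sysctl file is absent, then force ip_forward on. Must run as root.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
		// mirrors `sudo modprobe br_netfilter` from the log
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatal(err)
	}
}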
	I0630 14:18:46.516885 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:46.649290 1558425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 14:18:46.754804 1558425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 14:18:46.754924 1558425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 14:18:46.760277 1558425 start.go:563] Will wait 60s for crictl version
	I0630 14:18:46.760374 1558425 ssh_runner.go:195] Run: which crictl
	I0630 14:18:46.764622 1558425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:18:46.806540 1558425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 14:18:46.806668 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.835571 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.870294 1558425 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 14:18:46.871793 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:46.874897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875281 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:46.875316 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875568 1558425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:18:46.880040 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
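The bash one-liner above rewrites /etc/hosts idempotently: strip any existing host.minikube.internal entry, append the gateway mapping, and copy the result back. A sketch of the same filter-and-append in Go; the path and entry mirror the log.

// Hypothetical sketch of the /etc/hosts rewrite above: drop any existing
// host.minikube.internal line and append the gateway IP, as the log's
// bash one-liner does.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const path = "/etc/hosts"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}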
	I0630 14:18:46.893844 1558425 kubeadm.go:875] updating cluster {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:18:46.894040 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:46.894098 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:46.928051 1558425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:18:46.928142 1558425 ssh_runner.go:195] Run: which lz4
	I0630 14:18:46.932106 1558425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:18:46.936459 1558425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:18:46.936498 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 14:18:48.250677 1558425 crio.go:462] duration metric: took 1.318609473s to copy over tarball
	I0630 14:18:48.250794 1558425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:18:50.229636 1558425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978807649s)
	I0630 14:18:50.229688 1558425 crio.go:469] duration metric: took 1.978978941s to extract the tarball
	I0630 14:18:50.229696 1558425 ssh_runner.go:146] rm: /preloaded.tar.lz4
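Because no preload was found on the guest, the tarball of cached images is copied over and unpacked with tar's lz4 filter, preserving xattrs, then removed. A minimal Go sketch shelling out with the same flags the log shows:

// Hypothetical sketch of the preload extraction step above: shell out to
// tar with the flags from the log, then delete the tarball.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}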
	I0630 14:18:50.268804 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:50.313787 1558425 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 14:18:50.313824 1558425 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:18:50.313836 1558425 kubeadm.go:926] updating node { 192.168.39.227 8443 v1.33.2 crio true true} ...
	I0630 14:18:50.313984 1558425 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-301682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 14:18:50.314108 1558425 ssh_runner.go:195] Run: crio config
	I0630 14:18:50.358762 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:50.358788 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:50.358799 1558425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:18:50.358821 1558425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-301682 NodeName:addons-301682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:18:50.358985 1558425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-301682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 14:18:50.359075 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:18:50.370269 1558425 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:18:50.370359 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:18:50.381422 1558425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0630 14:18:50.402864 1558425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:18:50.423535 1558425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
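The kubeadm.yaml shown earlier is rendered from the kubeadm options struct logged at kubeadm.go:189. A hypothetical sketch of rendering one fragment of it with text/template; the struct and field names here are illustrative, not minikube's actual types.

// Hypothetical sketch of rendering a fragment of the kubeadm config
// above from Go values with text/template. Types are illustrative only.
package main

import (
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type initOpts struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, initOpts{
		AdvertiseAddress: "192.168.39.227",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "addons-301682",
	})
}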
	I0630 14:18:50.443802 1558425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0630 14:18:50.448073 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:50.462771 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:50.610565 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:18:50.641674 1558425 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682 for IP: 192.168.39.227
	I0630 14:18:50.641703 1558425 certs.go:194] generating shared ca certs ...
	I0630 14:18:50.641726 1558425 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.641917 1558425 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 14:18:50.775973 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt ...
	I0630 14:18:50.776127 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt: {Name:mk4a7e2f23df1877aa667a5fe9d149d87fa65b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776340 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key ...
	I0630 14:18:50.776353 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key: {Name:mkfe815a12ae8eded146419f42722ed747bb8cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776428 1558425 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 14:18:51.239699 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt ...
	I0630 14:18:51.239736 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt: {Name:mk010f91985630538e2436d654ff5b4cc759ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.239913 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key ...
	I0630 14:18:51.239969 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key: {Name:mk7a36f8a28748533897dd07634d8a5fe44a63a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.240059 1558425 certs.go:256] generating profile certs ...
	I0630 14:18:51.240131 1558425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key
	I0630 14:18:51.240150 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt with IP's: []
	I0630 14:18:51.635887 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt ...
	I0630 14:18:51.635927 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: {Name:mk22a67b2c0e90bc5dc67c34e330ee73fa799ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636119 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key ...
	I0630 14:18:51.636131 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key: {Name:mkbf3398b6d7cd5371d9a47d76e04eca4caef4d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636203 1558425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213
	I0630 14:18:51.636222 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0630 14:18:52.292769 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 ...
	I0630 14:18:52.292809 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213: {Name:mk1402d3ac26fc5001a4011347c3552a378bda20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.292987 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 ...
	I0630 14:18:52.293001 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213: {Name:mkeaa6e21db5ae6cfb6b65c2ca90535340da5144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.293104 1558425 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt
	I0630 14:18:52.293196 1558425 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key
	I0630 14:18:52.293250 1558425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key
	I0630 14:18:52.293270 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt with IP's: []
	I0630 14:18:52.419123 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt ...
	I0630 14:18:52.419160 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt: {Name:mk3dd33047a5c3911a43a99bfac807aefa8e06f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419432 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key ...
	I0630 14:18:52.419460 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key: {Name:mk0d0b95d0dc825fc1e604461553530ed22a222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419680 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:18:52.419719 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:18:52.419744 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:18:52.419768 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 14:18:52.420585 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:18:52.463313 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:18:52.499004 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:18:52.526030 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 14:18:52.553220 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:18:52.581783 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:18:52.609656 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:18:52.639333 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 14:18:52.668789 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:18:52.696673 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:18:52.718151 1558425 ssh_runner.go:195] Run: openssl version
	I0630 14:18:52.724602 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:18:52.737181 1558425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742169 1558425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742231 1558425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.749342 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
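The two openssl steps above install minikubeCA.pem into the system trust directory under its subject-hash name (b5213941.0), which is how OpenSSL locates CA certificates. A sketch that computes the hash and creates the symlink the same way:

// Hypothetical sketch of the CA install steps above: compute the
// OpenSSL subject hash for minikubeCA.pem and symlink it into
// /etc/ssl/certs as "<hash>.0", as the log's shell commands do.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
}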
	I0630 14:18:52.762744 1558425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:18:52.768406 1558425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:18:52.768474 1558425 kubeadm.go:392] StartCluster: {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:52.768572 1558425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 14:18:52.768641 1558425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:18:52.812315 1558425 cri.go:89] found id: ""
	I0630 14:18:52.812437 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:18:52.824357 1558425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:18:52.837485 1558425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:18:52.850688 1558425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:18:52.850718 1558425 kubeadm.go:157] found existing configuration files:
	
	I0630 14:18:52.850770 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:18:52.862272 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:18:52.862353 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:18:52.874603 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:18:52.885384 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:18:52.885470 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:18:52.897341 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.908726 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:18:52.908791 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.920093 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:18:52.930423 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:18:52.930535 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:18:52.943582 1558425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:18:53.101493 1558425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:19:04.329808 1558425 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:19:04.329898 1558425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:19:04.330028 1558425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:19:04.330246 1558425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:19:04.330383 1558425 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:19:04.330478 1558425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:19:04.332630 1558425 out.go:235]   - Generating certificates and keys ...
	I0630 14:19:04.332731 1558425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:19:04.332810 1558425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:19:04.332905 1558425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:19:04.332972 1558425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:19:04.333024 1558425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:19:04.333069 1558425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:19:04.333119 1558425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:19:04.333250 1558425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333332 1558425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:19:04.333509 1558425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333623 1558425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:19:04.333739 1558425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:19:04.333816 1558425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:19:04.333868 1558425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:19:04.333909 1558425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:19:04.333955 1558425 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:19:04.334001 1558425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:19:04.334088 1558425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:19:04.334155 1558425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:19:04.334337 1558425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:19:04.334433 1558425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:19:04.336040 1558425 out.go:235]   - Booting up control plane ...
	I0630 14:19:04.336158 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:19:04.336225 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:19:04.336291 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:19:04.336387 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:19:04.336461 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:19:04.336498 1558425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:19:04.336705 1558425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:19:04.336826 1558425 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:19:04.336898 1558425 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501258s
	I0630 14:19:04.336999 1558425 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:19:04.337079 1558425 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.227:8443/livez
	I0630 14:19:04.337160 1558425 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:19:04.337266 1558425 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:19:04.337343 1558425 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.200262885s
	I0630 14:19:04.337437 1558425 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.075387862s
	I0630 14:19:04.337541 1558425 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001441935s
	I0630 14:19:04.337665 1558425 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:19:04.337791 1558425 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:19:04.337843 1558425 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:19:04.338003 1558425 kubeadm.go:310] [mark-control-plane] Marking the node addons-301682 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:19:04.338066 1558425 kubeadm.go:310] [bootstrap-token] Using token: anrlv2.kitz2ouxhot5qn5d
	I0630 14:19:04.339966 1558425 out.go:235]   - Configuring RBAC rules ...
	I0630 14:19:04.340101 1558425 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:19:04.340226 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:19:04.340408 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:19:04.340552 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:19:04.340686 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:19:04.340806 1558425 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:19:04.340905 1558425 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:19:04.340944 1558425 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:19:04.340984 1558425 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:19:04.340990 1558425 kubeadm.go:310] 
	I0630 14:19:04.341040 1558425 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:19:04.341045 1558425 kubeadm.go:310] 
	I0630 14:19:04.341135 1558425 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:19:04.341142 1558425 kubeadm.go:310] 
	I0630 14:19:04.341172 1558425 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:19:04.341223 1558425 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:19:04.341270 1558425 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:19:04.341276 1558425 kubeadm.go:310] 
	I0630 14:19:04.341322 1558425 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:19:04.341328 1558425 kubeadm.go:310] 
	I0630 14:19:04.341449 1558425 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:19:04.341467 1558425 kubeadm.go:310] 
	I0630 14:19:04.341541 1558425 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:19:04.341643 1558425 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:19:04.341707 1558425 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:19:04.341712 1558425 kubeadm.go:310] 
	I0630 14:19:04.341781 1558425 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:19:04.341846 1558425 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:19:04.341851 1558425 kubeadm.go:310] 
	I0630 14:19:04.341924 1558425 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342019 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 14:19:04.342038 1558425 kubeadm.go:310] 	--control-plane 
	I0630 14:19:04.342043 1558425 kubeadm.go:310] 
	I0630 14:19:04.342140 1558425 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:19:04.342157 1558425 kubeadm.go:310] 
	I0630 14:19:04.342225 1558425 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342331 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
	I0630 14:19:04.342344 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:19:04.342353 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:19:04.344305 1558425 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:19:04.345962 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:19:04.358944 1558425 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
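The 496-byte /etc/cni/net.d/1-k8s.conflist written above wires up the bridge CNI for the 10.244.0.0/16 pod CIDR. The log does not show the file's contents; the sketch below writes an assumed minimal bridge conflist of roughly that shape, not minikube's actual file.

// Hypothetical sketch: a minimal bridge CNI conflist like the
// /etc/cni/net.d/1-k8s.conflist written above. The contents are an
// assumed shape, since the actual file is not shown in the log.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}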
	I0630 14:19:04.382550 1558425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:19:04.382682 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:04.382684 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-301682 minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-301682 minikube.k8s.io/primary=true
	I0630 14:19:04.443025 1558425 ops.go:34] apiserver oom_adj: -16
	I0630 14:19:04.557859 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.058710 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.558655 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.058095 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.558920 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.058903 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.558782 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.058045 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.134095 1558425 kubeadm.go:1105] duration metric: took 3.751500145s to wait for elevateKubeSystemPrivileges
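The eight "kubectl get sa default" runs above are a poll: minikube retries roughly every 500ms until the default service account exists, then the cluster-admin binding applied earlier can take effect, which is the 3.75s elevateKubeSystemPrivileges wait just recorded. A sketch of that wait loop, using the binary and kubeconfig paths as logged; the 2-minute deadline is an assumed stand-in, not minikube's actual timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.33.2/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline
	for time.Now().Before(deadline) {
		// Succeeds once the default service account has been created.
		if exec.Command(kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}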
	I0630 14:19:08.134146 1558425 kubeadm.go:394] duration metric: took 15.365674649s to StartCluster
	I0630 14:19:08.134169 1558425 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.134310 1558425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:19:08.134819 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
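Kubeconfig updates are serialized through a named lock with the Delay:500ms and Timeout:1m0s parameters shown in the struct above. A rough stand-in using an O_EXCL lock file; minikube's real lock implementation differs:

package main

import (
	"errors"
	"io/fs"
	"log"
	"os"
	"time"
)

func writeWithLock(path string, data []byte) error {
	lock := path + ".lock"
	deadline := time.Now().Add(time.Minute) // Timeout:1m0s, as logged
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lock)
			break
		}
		if !errors.Is(err, fs.ErrExist) || time.Now().After(deadline) {
			return err
		}
		time.Sleep(500 * time.Millisecond) // Delay:500ms, as logged
	}
	return os.WriteFile(path, data, 0o600)
}

func main() {
	if err := writeWithLock("/tmp/kubeconfig", []byte("apiVersion: v1\n")); err != nil {
		log.Fatal(err)
	}
}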
	I0630 14:19:08.135078 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:19:08.135086 1558425 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:19:08.135172 1558425 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
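Each enabled key in the toEnable map above is then handled concurrently, which is why the "Setting addon" lines that follow interleave and appear out of timestamp order. A sketch of that fan-out pattern with a stand-in enable step (the print is illustrative, not minikube's enable logic):

package main

import (
	"fmt"
	"sync"
)

func main() {
	toEnable := map[string]bool{
		"registry": true, "ingress": true, "metrics-server": true,
		"volcano": true, "dashboard": false,
	}
	var wg sync.WaitGroup
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		wg.Add(1)
		go func(addon string) {
			defer wg.Done()
			fmt.Printf("Setting addon %s=true in %q\n", addon, "addons-301682")
		}(name)
	}
	wg.Wait()
}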
	I0630 14:19:08.135355 1558425 addons.go:69] Setting yakd=true in profile "addons-301682"
	I0630 14:19:08.135370 1558425 addons.go:69] Setting default-storageclass=true in profile "addons-301682"
	I0630 14:19:08.135401 1558425 addons.go:69] Setting ingress=true in profile "addons-301682"
	I0630 14:19:08.135408 1558425 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-301682"
	I0630 14:19:08.135419 1558425 addons.go:69] Setting ingress-dns=true in profile "addons-301682"
	I0630 14:19:08.135425 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-301682"
	I0630 14:19:08.135433 1558425 addons.go:238] Setting addon ingress-dns=true in "addons-301682"
	I0630 14:19:08.135450 1558425 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135439 1558425 addons.go:69] Setting cloud-spanner=true in profile "addons-301682"
	I0630 14:19:08.135466 1558425 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-301682"
	I0630 14:19:08.135453 1558425 addons.go:69] Setting registry-creds=true in profile "addons-301682"
	I0630 14:19:08.135470 1558425 addons.go:238] Setting addon cloud-spanner=true in "addons-301682"
	I0630 14:19:08.135482 1558425 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-301682"
	I0630 14:19:08.135488 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135499 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-301682"
	I0630 14:19:08.135507 1558425 addons.go:238] Setting addon registry-creds=true in "addons-301682"
	I0630 14:19:08.135508 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135522 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135532 1558425 addons.go:69] Setting volcano=true in profile "addons-301682"
	I0630 14:19:08.135553 1558425 addons.go:238] Setting addon volcano=true in "addons-301682"
	I0630 14:19:08.135560 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135601 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135968 1558425 addons.go:69] Setting storage-provisioner=true in profile "addons-301682"
	I0630 14:19:08.135968 1558425 addons.go:69] Setting volumesnapshots=true in profile "addons-301682"
	I0630 14:19:08.135383 1558425 addons.go:238] Setting addon yakd=true in "addons-301682"
	I0630 14:19:08.135985 1558425 addons.go:238] Setting addon storage-provisioner=true in "addons-301682"
	I0630 14:19:08.135986 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135992 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135999 1558425 addons.go:69] Setting metrics-server=true in profile "addons-301682"
	I0630 14:19:08.136001 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135468 1558425 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:08.136013 1558425 addons.go:238] Setting addon metrics-server=true in "addons-301682"
	I0630 14:19:08.136018 1558425 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136026 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136033 1558425 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-301682"
	I0630 14:19:08.136033 1558425 addons.go:69] Setting registry=true in profile "addons-301682"
	I0630 14:19:08.136037 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136042 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136046 1558425 addons.go:238] Setting addon registry=true in "addons-301682"
	I0630 14:19:08.136053 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136053 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136063 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136333 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136344 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135988 1558425 addons.go:238] Setting addon volumesnapshots=true in "addons-301682"
	I0630 14:19:08.136373 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136380 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135392 1558425 addons.go:69] Setting gcp-auth=true in profile "addons-301682"
	I0630 14:19:08.136406 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135413 1558425 addons.go:238] Setting addon ingress=true in "addons-301682"
	I0630 14:19:08.136410 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136430 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136437 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136439 1558425 mustload.go:65] Loading cluster: addons-301682
	I0630 14:19:08.135985 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136376 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136021 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136019 1558425 addons.go:69] Setting inspektor-gadget=true in profile "addons-301682"
	I0630 14:19:08.136533 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136408 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136571 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136399 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136594 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136043 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136654 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136035 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135386 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136802 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136830 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136538 1558425 addons.go:238] Setting addon inspektor-gadget=true in "addons-301682"
	I0630 14:19:08.136860 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136968 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.137006 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.141678 1558425 out.go:177] * Verifying Kubernetes components...
	I0630 14:19:08.143558 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:19:08.149915 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.149982 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.150069 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.150111 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.153357 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.153432 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.165614 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0630 14:19:08.165858 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0630 14:19:08.166745 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.166906 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.167573 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167595 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.167730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167744 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.168231 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168297 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
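The repeating sequence above (Found binary path, Launching plugin server, Plugin server listening, GetVersion, SetConfigRaw, GetMachineName, GetState) is the libmachine plugin handshake: each driver instance runs as a child process serving RPC on a random loopback port, and the client negotiates the plugin API version before issuing driver calls. A sketch of that shape with illustrative method names, not libmachine's exact API:

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

type Plugin struct{}

// GetVersion mirrors the "() Calling .GetVersion" step in the log.
func (p *Plugin) GetVersion(_ int, v *int) error { *v = 1; return nil }

func main() {
	if err := rpc.Register(new(Plugin)); err != nil {
		log.Fatal(err)
	}
	l, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, as in the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Plugin server listening at address", l.Addr())
	go rpc.Accept(l)

	client, err := rpc.Dial("tcp", l.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var version int
	if err := client.Call("Plugin.GetVersion", 0, &version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Using API Version ", version)
}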
	I0630 14:19:08.168851 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.168901 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.173235 1558425 addons.go:238] Setting addon default-storageclass=true in "addons-301682"
	I0630 14:19:08.173294 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.173724 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.173785 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.184456 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0630 14:19:08.185663 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.186359 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.186383 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.186868 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.187481 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.187524 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.198676 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0630 14:19:08.199720 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0630 14:19:08.200624 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.201056 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0630 14:19:08.201384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.201425 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.201824 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.202320 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.202341 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.202767 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.203373 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.203425 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.203875 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.204017 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.204559 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.204608 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.204944 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.204958 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.205500 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.206106 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.206167 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.212484 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0630 14:19:08.213076 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.213762 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.213782 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.214717 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0630 14:19:08.214882 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0630 14:19:08.215450 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.215549 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.216208 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216234 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216395 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216419 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216498 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.216551 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0630 14:19:08.217141 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.217198 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.218026 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218644 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218679 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:19:08.218965 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219098 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0630 14:19:08.219374 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.219416 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.219490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.219517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.219600 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219645 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.220038 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220058 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.220197 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220208 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.222722 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0630 14:19:08.222897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0630 14:19:08.223028 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.223845 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.223892 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.224072 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0630 14:19:08.224388 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0630 14:19:08.224623 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.225142 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.225164 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.225248 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0630 14:19:08.225593 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226043 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226641 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.226692 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.227826 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.228314 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.228351 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.228730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.228753 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.228834 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.228874 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0630 14:19:08.229220 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.229470 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.229681 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.229725 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.230097 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.230128 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.240167 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.240974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.241058 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0630 14:19:08.243477 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.243596 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0630 14:19:08.261647 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0630 14:19:08.261668 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0630 14:19:08.261862 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0630 14:19:08.262201 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0630 14:19:08.261652 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0630 14:19:08.261852 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0630 14:19:08.262971 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.263041 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263580 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263640 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263642 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263689 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263697 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263766 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263767 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264204 1558425 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:19:08.264700 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264710 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264910 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.264924 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265056 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265067 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265244 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265261 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265313 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265330 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265397 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265504 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265522 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265580 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265661 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265661 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:08.265674 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265689 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:19:08.265696 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265706 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265712 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.265940 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265988 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266721 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266732 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266787 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266802 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266873 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266885 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266892 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266920 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266927 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266935 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266948 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266963 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267095 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267169 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267219 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267412 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267464 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267868 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.267912 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.268375 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.268443 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.268484 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.269549 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.269597 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.270926 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.272833 1558425 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:19:08.274128 1558425 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.274146 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:19:08.274171 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
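"scp memory --> <path>" in these lines means the addon manifest never touches the local disk: an in-memory byte slice is streamed over the established SSH session into the target file on the node. A helper-only sketch of that effect, taking a connected *ssh.Client like the ones sshutil builds a few lines further down (see the dial sketch there); a plain shell redirect stands in for minikube's scp-over-ssh, and sudo handling is omitted:

package addons

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams an in-memory manifest into a file on the node, the
// same effect as the "scp memory --> /etc/kubernetes/addons/..." lines.
func writeRemote(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("cat > %s", path))
}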
	I0630 14:19:08.274859 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275064 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275721 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.276192 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275698 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.277235 1558425 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:19:08.277261 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:19:08.277735 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.277888 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:19:08.277911 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:19:08.278583 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.278754 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.278813 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.278881 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:19:08.278897 1558425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:19:08.278922 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279033 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.279041 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:19:08.279054 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279564 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.279577 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:19:08.279593 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279642 1558425 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:19:08.281429 1558425 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:08.281448 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:19:08.281468 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.281533 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.282713 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.283764 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284087 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284228 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:08.284248 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:19:08.284269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.284461 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284503 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284726 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.284883 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.284950 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284965 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285137 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.285324 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
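Each sshutil line above constructs an SSH client from the struct it prints. A sketch of the equivalent dial with golang.org/x/crypto/ssh, using the IP, port, key path, and username exactly as logged; the relaxed host-key callback is for a throwaway test VM only:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.227:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	fmt.Println("connected to", client.RemoteAddr())
}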
	I0630 14:19:08.285515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.285599 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285736 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286034 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.286041 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286069 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286207 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.286615 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.286628 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286660 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286673 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.287215 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287232 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.287998 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287988 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288619 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288647 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.288829 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.288982 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289082 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.289115 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289387 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289495 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289954 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.289983 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.290152 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290230 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290347 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290431 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.291154 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.292418 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.292454 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.292433 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.292721 1558425 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-301682"
	I0630 14:19:08.292738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.292763 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.292887 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.293016 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.293150 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.293200 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.294549 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:19:08.296018 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:19:08.297203 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:19:08.298509 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:19:08.299741 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:19:08.301072 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:19:08.302287 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:19:08.303246 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0630 14:19:08.303926 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.304284 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:19:08.304575 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.304600 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.305069 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.305303 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.305513 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:19:08.305597 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:19:08.305646 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0630 14:19:08.308495 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0630 14:19:08.309009 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309265 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309301 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309500 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.309544 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309729 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.309915 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.310105 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.310445 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.310557 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.310962 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.310986 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312430 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.312542 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0630 14:19:08.312690 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.312715 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0630 14:19:08.312896 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.312908 1558425 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:08.312914 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312922 1558425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:19:08.312899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.312950 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.312967 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0630 14:19:08.313116 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.313130 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.313608 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.313798 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.314003 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314075 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.314701 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314761 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.314826 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.315163 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315447 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.315638 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315743 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.315801 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.316217 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.316239 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.316441 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.317458 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.317755 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.318404 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.318763 1558425 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:19:08.319446 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.319608 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.319686 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.319964 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.319978 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320265 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.320279 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:08.320350 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.320357 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320810 1558425 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:19:08.320976 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:19:08.321001 1558425 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:19:08.321024 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.321215 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:19:08.322277 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:19:08.322294 1558425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:19:08.322314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323097 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323112 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.323135 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:08.323167 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.323175 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:08.323273 1558425 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0630 14:19:08.323158 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.323505 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323867 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:19:08.323883 1558425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:19:08.323899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323920 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.323964 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0630 14:19:08.324118 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.324491 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.324603 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:19:08.324644 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.324757 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.325272 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.325293 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.327148 1558425 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:19:08.328448 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328463 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.328471 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0630 14:19:08.328485 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.328486 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:19:08.328506 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:19:08.328469 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.328555 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.329271 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329296 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329298 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329306 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.329427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329488 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329522 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329831 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329844 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329873 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329893 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.329908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329932 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.329965 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.330048 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330100 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330127 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.330233 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330571 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.330635 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330797 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.331366 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.331539 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.333151 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.333196 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.333924 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.333946 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.334093 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.334267 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.334413 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.334534 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.335093 1558425 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:19:08.336351 1558425 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:08.336368 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:19:08.336384 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.339580 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340100 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.340140 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.340523 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.340672 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.340813 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
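Every addon in this log is installed the same way: minikube renders the manifest in memory, copies it to /etc/kubernetes/addons/ on the node over SSH (the "scp memory --> ..." lines), then applies it with the kubectl binary bundled on the node. A minimal Go sketch of that pattern, where Runner is a hypothetical stand-in for minikube's ssh_runner, not its real API:

	package addons

	import "fmt"

	// Runner abstracts the two ssh_runner operations visible in this log.
	type Runner interface {
		Copy(data []byte, dst string) error // "scp memory --> dst (N bytes)"
		Run(cmd string) error               // "ssh_runner: Run: ..."
	}

	func installAddon(r Runner, name string, manifest []byte) error {
		dst := "/etc/kubernetes/addons/" + name
		if err := r.Copy(manifest, dst); err != nil {
			return fmt.Errorf("scp %s: %w", dst, err)
		}
		// Same shape as the apply commands that follow in the log.
		return r.Run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.33.2/kubectl apply -f " + dst)
	}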
	I0630 14:19:08.350360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0630 14:19:08.350984 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.351790 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.351819 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.352186 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.352420 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.354260 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.356054 1558425 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:19:08.357435 1558425 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:19:08.358781 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:08.358803 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:19:08.358828 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.362552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.362966 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.362990 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.363100 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.363314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.363506 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.363630 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.439689 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
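This pipeline rewrites the coredns ConfigMap in place: the first sed expression inserts a hosts block mapping host.minikube.internal to the gateway 192.168.39.1 ahead of the forward directive, and the second enables query logging ahead of errors. Unescaping those expressions, the patched Corefile gains roughly this shape (default .:53 server block assumed, other directives elided):

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}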
	I0630 14:19:08.476644 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:19:08.843915 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.877498 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.886078 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:19:08.886117 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:19:08.911521 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.934599 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:09.020016 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:09.040482 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:19:09.040511 1558425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:19:09.043569 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:09.148704 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:09.202814 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:19:09.202869 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:19:09.278194 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:19:09.278231 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:19:09.295189 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:09.295224 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:19:09.299217 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:19:09.299263 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:19:09.332360 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:19:09.332403 1558425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:19:09.352402 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:19:09.352438 1558425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:19:09.405398 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:09.451227 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:09.755506 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:19:09.755546 1558425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:19:09.891227 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:19:09.891271 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:19:09.920129 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:09.920177 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:19:09.934092 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:19:09.934135 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:19:09.987104 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:09.987162 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:19:10.065936 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:10.412611 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:19:10.412651 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:19:10.472848 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:10.472884 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:19:10.534908 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:10.637801 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:19:10.637839 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:19:10.658361 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:10.787257 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:19:10.787289 1558425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:19:10.989751 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:11.047653 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:19:11.047693 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:19:11.196682 1558425 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.196715 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:19:11.291758 1558425 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.852019855s)
	I0630 14:19:11.291806 1558425 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:19:11.291816 1558425 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.815128335s)
	I0630 14:19:11.292560 1558425 node_ready.go:35] waiting up to 6m0s for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314454 1558425 node_ready.go:49] node "addons-301682" is "Ready"
	I0630 14:19:11.314498 1558425 node_ready.go:38] duration metric: took 21.89293ms for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314515 1558425 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:19:11.314579 1558425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:19:11.614705 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:19:11.614735 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:19:11.736486 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:19:11.736514 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:19:11.778191 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.869515 1558425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-301682" context rescaled to 1 replicas
	I0630 14:19:12.215816 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:19:12.215858 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:19:12.875440 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:19:12.875469 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:19:13.113763 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:19:13.113791 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:19:13.233897 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.233936 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:19:13.547481 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.908710 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.064741353s)
	I0630 14:19:13.908777 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.031226379s)
	I0630 14:19:13.908828 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908848 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908846 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.997298204s)
	I0630 14:19:13.908863 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908877 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908789 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908930 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908964 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.974334377s)
	I0630 14:19:13.908996 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909007 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909009 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.888949022s)
	I0630 14:19:13.909048 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909061 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909699 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.909716 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.909725 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909733 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910126 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910140 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910150 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910156 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910411 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910438 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910445 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910452 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910457 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910696 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910727 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910744 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910751 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910757 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.911970 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912059 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912080 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912106 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.912127 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.912244 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912321 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912376 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912399 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912409 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912423 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912436 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912476 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912487 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913952 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:15.489658 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:19:15.489718 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:15.493165 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493587 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:15.493623 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493976 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:15.494223 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:15.494515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:15.494707 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:15.765543 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:19:15.978232 1558425 addons.go:238] Setting addon gcp-auth=true in "addons-301682"
	I0630 14:19:15.978326 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:15.978844 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:15.978897 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:15.997982 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0630 14:19:15.998461 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:15.999138 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:15.999166 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:15.999618 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.000381 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:16.000428 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:16.018425 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0630 14:19:16.018996 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:16.019552 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:16.019578 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:16.020118 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.020378 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:16.022570 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:16.022848 1558425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:19:16.022880 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:16.026200 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027053 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:16.027107 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027360 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:16.027605 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:16.027797 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:16.027986 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:16.771513 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.727888765s)
	I0630 14:19:16.771570 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.622822849s)
	I0630 14:19:16.771591 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771607 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771630 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771647 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771647 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.36619116s)
	I0630 14:19:16.771673 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771688 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771767 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.320503654s)
	I0630 14:19:16.771831 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771842 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.705862816s)
	I0630 14:19:16.771865 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771873 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771904 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.236967233s)
	I0630 14:19:16.771940 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771966 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771989 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.113597897s)
	I0630 14:19:16.772016 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772026 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772112 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.782331879s)
	I0630 14:19:16.772132 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772140 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772199 1558425 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.457605469s)
	I0630 14:19:16.772216 1558425 api_server.go:72] duration metric: took 8.637102064s to wait for apiserver process to appear ...
	I0630 14:19:16.772223 1558425 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:19:16.772245 1558425 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0630 14:19:16.771847 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772472 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772489 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772500 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772508 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772567 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772660 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772670 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772678 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772685 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772744 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772768 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772774 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772782 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772789 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773055 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773073 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773096 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773119 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773125 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773131 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773137 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773371 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773380 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773388 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773398 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773540 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773583 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773592 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773602 1558425 addons.go:479] Verifying addon registry=true in "addons-301682"
	I0630 14:19:16.773651 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773661 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773668 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773675 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773927 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773965 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774128 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774333 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774357 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774383 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774389 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774656 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774694 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774695 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774703 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774710 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774722 1558425 addons.go:479] Verifying addon ingress=true in "addons-301682"
	I0630 14:19:16.774767 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774700 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774931 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.774943 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.774797 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775055 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.775066 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.775086 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.775936 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775954 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776331 1558425 out.go:177] * Verifying ingress addon...
	I0630 14:19:16.776373 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776407 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776413 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776457 1558425 out.go:177] * Verifying registry addon...
	I0630 14:19:16.776565 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776586 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776591 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776599 1558425 addons.go:479] Verifying addon metrics-server=true in "addons-301682"
	I0630 14:19:16.776668 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776681 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.778466 1558425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:19:16.779098 1558425 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-301682 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:19:16.779694 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:19:16.788556 1558425 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0630 14:19:16.789906 1558425 api_server.go:141] control plane version: v1.33.2
	I0630 14:19:16.789941 1558425 api_server.go:131] duration metric: took 17.709666ms to wait for apiserver health ...
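The healthz wait that completes here is a plain HTTPS poll: GET /healthz on the apiserver until it answers 200 with body "ok". A self-contained sketch of that loop (InsecureSkipVerify is an assumption to keep the example standalone; minikube authenticates with the cluster's certificates instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch only: skip cert verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // matches "returned 200: ok" above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.39.227:8443/healthz", time.Minute))
	}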
	I0630 14:19:16.789955 1558425 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:19:16.796628 1558425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:19:16.796662 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:16.796921 1558425 system_pods.go:59] 15 kube-system pods found
	I0630 14:19:16.796954 1558425 system_pods.go:61] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.796961 1558425 system_pods.go:61] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.796972 1558425 system_pods.go:61] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.796976 1558425 system_pods.go:61] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.796984 1558425 system_pods.go:61] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.796987 1558425 system_pods.go:61] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.796992 1558425 system_pods.go:61] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.796997 1558425 system_pods.go:61] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.797004 1558425 system_pods.go:61] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.797011 1558425 system_pods.go:61] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.797018 1558425 system_pods.go:61] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.797028 1558425 system_pods.go:61] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.797035 1558425 system_pods.go:61] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.797042 1558425 system_pods.go:61] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.797049 1558425 system_pods.go:61] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.797057 1558425 system_pods.go:74] duration metric: took 7.094316ms to wait for pod list to return data ...
	I0630 14:19:16.797068 1558425 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:19:16.798790 1558425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:19:16.798807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
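The kapi waits above poll the API for pods matching a label selector and block until each reports the Ready condition; the "Pending: [<nil>]" states are those polls observing not-yet-ready pods. A sketch of the same check with client-go (kubeconfig path and selector are taken from this log; the helper itself is illustrative, not minikube's kapi code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podsReady reports whether every pod matching selector is Ready.
	func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil // still Pending, as in the log above
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			ok, err := podsReady(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
			if err == nil && ok {
				fmt.Println("all pods ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}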
	I0630 14:19:16.809885 1558425 default_sa.go:45] found service account: "default"
	I0630 14:19:16.809914 1558425 default_sa.go:55] duration metric: took 12.83884ms for default service account to be created ...
	I0630 14:19:16.809925 1558425 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:19:16.818226 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.818251 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.818525 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.818587 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:16.818715 1558425 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
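This warning is Kubernetes' optimistic-concurrency conflict: the local-path StorageClass changed between minikube's read and its write, so the update carrying the stale resourceVersion is rejected. A standard remedy, sketched here with client-go (illustrative, not what minikube does internally), is to re-read and re-apply the mutation inside retry.RetryOnConflict:

	package storageclass

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, retrying on the
	// exact conflict error shown in the warning above.
	func markNonDefault(cs *kubernetes.Clientset, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
			return err // a fresh conflict here triggers another retry
		})
	}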
	I0630 14:19:16.836146 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.836179 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.836489 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.836539 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.898260 1558425 system_pods.go:86] 15 kube-system pods found
	I0630 14:19:16.898321 1558425 system_pods.go:89] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.898334 1558425 system_pods.go:89] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.898347 1558425 system_pods.go:89] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.898355 1558425 system_pods.go:89] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.898364 1558425 system_pods.go:89] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.898371 1558425 system_pods.go:89] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.898380 1558425 system_pods.go:89] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.898390 1558425 system_pods.go:89] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.898398 1558425 system_pods.go:89] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.898406 1558425 system_pods.go:89] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.898431 1558425 system_pods.go:89] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.898443 1558425 system_pods.go:89] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.898451 1558425 system_pods.go:89] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.898461 1558425 system_pods.go:89] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.898471 1558425 system_pods.go:89] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.898485 1558425 system_pods.go:126] duration metric: took 88.551205ms to wait for k8s-apps to be running ...
	I0630 14:19:16.898500 1558425 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:19:16.898565 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:19:17.317126 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:17.374411 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.596164186s)
	W0630 14:19:17.374478 1558425 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:19:17.374547 1558425 retry.go:31] will retry after 162.408109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
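The failure above is a CRD ordering race rather than a bad manifest: the VolumeSnapshotClass object is submitted in the same apply batch that creates its CustomResourceDefinition, and the apiserver has not established the new type yet ("ensure CRDs are installed first"). minikube handles it by waiting and retrying, and the next attempt re-applies with --force. A minimal sketch of that retry-with-backoff shape (the doubling interval is an assumption; minikube's retry.go computes its own schedule):

	package main

	import (
		"fmt"
		"time"
	)

	// retryApply runs step until it succeeds, sleeping a growing interval
	// between attempts, mirroring the "will retry after ..." lines above.
	func retryApply(attempts int, initial time.Duration, step func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = step(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // assumed doubling; the real schedule differs
		}
		return err
	}

	func main() {
		n := 0
		_ = retryApply(5, 160*time.Millisecond, func() error {
			n++
			if n < 2 { // first apply races the CRD establishment
				return fmt.Errorf("no matches for kind \"VolumeSnapshotClass\"")
			}
			return nil
		})
	}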
	I0630 14:19:17.425522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.537869 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:17.785630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.785674 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.306660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.306889 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.552015 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.004467325s)
	I0630 14:19:18.552194 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552225 1558425 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529350239s)
	I0630 14:19:18.552276 1558425 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.653693225s)
	I0630 14:19:18.552302 1558425 system_svc.go:56] duration metric: took 1.653798008s WaitForService to wait for kubelet
	I0630 14:19:18.552318 1558425 kubeadm.go:578] duration metric: took 10.417201876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:19:18.552241 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552348 1558425 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:19:18.552645 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552664 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552675 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552686 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552919 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552936 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552948 1558425 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:18.554300 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:18.555232 1558425 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:19:18.556214 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:19:18.556827 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:19:18.557433 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:19:18.557459 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:19:18.596354 1558425 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:19:18.596393 1558425 node_conditions.go:123] node cpu capacity is 2
	I0630 14:19:18.596408 1558425 node_conditions.go:105] duration metric: took 44.050461ms to run NodePressure ...
	I0630 14:19:18.596422 1558425 start.go:241] waiting for startup goroutines ...
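[Editor's note] The node_conditions lines above read capacity straight from the node object's status. Outside the test harness the same figures could be checked with, e.g., kubectl get node addons-301682 -o jsonpath='{.status.capacity}' (an illustrative command, not part of the test), and the NodePressure verification corresponds to the pressure conditions (MemoryPressure, DiskPressure, PIDPressure) reported under .status.conditions.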
	I0630 14:19:18.603104 1558425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:19:18.603135 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
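[Editor's note] The kapi.go:96 lines are produced by a simple list-and-check loop over a label selector. Below is a minimal client-go sketch of that pattern; it is an illustration only, not minikube's actual kapi implementation, and the package name, function name, and the 500ms/6m interval and timeout are assumptions.

	// Illustrative only: a minimal client-go version of the list-and-check
	// poll that the log above shows for each label selector. Names and the
	// interval/timeout values are assumptions, not minikube's own.
	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForPodsRunning re-lists pods matching selector in ns until every
	// one of them reports phase Running, or the timeout expires.
	func WaitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // selector has matched no pods yet
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending, as in the log above
					}
				}
				return true, nil
			})
	}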
	I0630 14:19:18.637868 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:19:18.637900 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:19:18.748099 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:18.748163 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:19:18.792604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.792626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.843691 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:19.062533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.282741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.282766 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:19.563538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.721889 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.183953285s)
	I0630 14:19:19.721971 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.721990 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.722705 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:19.722805 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.722841 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.722861 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.722870 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.723362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.723392 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.784854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.785087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.084451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.338994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.339229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.491192 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.647431709s)
	I0630 14:19:20.491275 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491294 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491664 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.491685 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.491696 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491704 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491987 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:20.492026 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.492052 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.493344 1558425 addons.go:479] Verifying addon gcp-auth=true in "addons-301682"
	I0630 14:19:20.495394 1558425 out.go:177] * Verifying gcp-auth addon...
	I0630 14:19:20.497751 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:19:20.544088 1558425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:19:20.544122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
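[Editor's note] With all four addons now in their verification phase, the remainder of the log below is the kapi.go:96 poll loop ticking concurrently for the four label selectors (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth), each re-listing its pods roughly every 500ms and reporting them still Pending.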
	I0630 14:19:20.616283 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.790338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.794229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.001876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.103156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.286215 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:21.287404 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.501971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.603568 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.782426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.783543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.002607 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.061769 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.283406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.283458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.501544 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.563768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.782065 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.785105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.001506 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.062272 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.283151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.283566 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.501628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.782561 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.783298 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.001778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.062179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.351397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.351533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:24.502302 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.560819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.783532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.783606 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.000665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.066861 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.283070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:25.283328 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.501446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.566260 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.782894 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.783547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.005011 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.064792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.283606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.502271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.561300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.782991 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.783050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.001311 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.061332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.282733 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:27.284226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.501814 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.562410 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.783241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.783497 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.002164 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.060264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.282980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:28.283180 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.500523 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.560485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.783545 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.000985 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.061185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.282663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.282792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.500648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.560782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.782042 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.783619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.001946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.060881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.282133 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:30.283049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.500975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.560862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.782609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.782603 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.001534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.060703 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.282157 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.283847 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:31.500628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.560669 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.782294 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.782820 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.001862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.061034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.281959 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:32.282969 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.501719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.561075 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.783855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.783890 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.001382 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.060618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:33.289955 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.501909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.560848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.782531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.784168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.003605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.060279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.282397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:34.282808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.613798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.614652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.782800 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.000818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.060998 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.282231 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.283653 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:35.509348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.560724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.781570 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.783017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.001083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.060369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.702785 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.703123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.703555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.706970 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:36.804241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.804456 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.001688 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.061214 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.282908 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.284915 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:37.500826 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.560092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.782407 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.784106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.001428 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.061107 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:38.282046 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:38.283180 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.501297 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.563927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.189422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.189531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.190495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.191248 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.282505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.282920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.500781 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.560685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.781821 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.001299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.071624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.283182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.283221 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:40.501026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.560313 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.783565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.783591 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.002088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.079056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.283365 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.283894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:41.501095 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.565670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.781792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.782774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.000619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.060899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:42.283068 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.501445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.560361 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.783776 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.783964 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.001605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.060231 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.284417 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:43.284499 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.501005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.560455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.782135 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.783795 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.001747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.061008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:44.281520 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:44.282610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.501859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.561166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.190446 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.291473 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291489 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.291572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.293575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.501432 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.560935 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.782091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.783835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.001576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.060855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.281632 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.282695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.500503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.560648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.781708 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.783401 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.001349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.060664 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.288991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.289151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.501378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.560670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.783679 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.783934 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.000774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.063640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.283018 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.288264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.501060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.782532 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.783014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.001586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.060136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.284470 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.284616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.501493 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.560740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.782176 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.783205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.001724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.061175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.285556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.285655 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.501435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.561083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.782238 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.783288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.001421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.060971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.312768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.312922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.501057 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.560396 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.782795 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.783117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.001134 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.060267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.283193 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.283291 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.502021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.560380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.783076 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.784387 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.001939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:53.061183 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:53.281990 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:53.283259 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.502028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:53.560640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:53.782501 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.783649 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:54.001220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:54.061666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:54.282039 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:54.283121 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:54.501316 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:54.560447 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:54.783504 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:54.783727 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.000517 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:55.061087 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:55.282418 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.283456 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:55.502008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:55.560325 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:55.783555 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:56.001431 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:56.060991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:56.282249 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:56.283767 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:56.501025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:56.560838 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:56.782271 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:56.782994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.001527 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:57.061065 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:57.283743 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.283956 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:57.502182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:57.560567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:57.783238 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.783763 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.001345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:58.060462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:58.282685 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.282967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:58.501929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:58.561387 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:58.782616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.783122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.001904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:59.061081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:59.282072 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:59.282798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.501590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:59.561148 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:59.783157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.783870 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.000897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:00.061506 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:00.281697 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.282838 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:00.500884 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:00.561577 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:00.781570 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.783296 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.002271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:01.061072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:01.282434 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:01.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.501896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:01.561570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:01.782586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.782842 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.000727 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:02.061003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:02.282765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:02.282809 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.501507 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:02.560968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:02.782628 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.782871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:03.001603 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:03.060848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:03.282653 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:03.283752 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:03.501978 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:03.560629 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:03.781639 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:03.782897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.001586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:04.061045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:04.283389 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.283730 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:04.500996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:04.560611 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:04.783093 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.783260 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:05.001555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:05.060738 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:05.282896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:05.282927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:05.501053 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:05.602159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:05.783741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:05.783966 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:06.001070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:06.060590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:06.282798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:06.282853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:06.500761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:06.560993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:06.784950 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:06.785237 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:07.001699 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:07.061334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:07.282883 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:07.283203 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:07.502196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:07.561691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:07.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:07.783652 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:08.001648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:08.061773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:08.281568 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:08.283567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:08.502500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:08.561076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:08.782892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:08.783238 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.001899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:09.060933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:09.282681 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.283009 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:09.501744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:09.561385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:09.782769 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.783806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.000774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:10.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:10.282325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:10.283050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.501741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:10.560858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:10.783005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.783200 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.001016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:11.060512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:11.283758 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.284197 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:11.502206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:11.560441 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:11.782907 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.783577 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.001888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:12.060849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:12.282280 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:12.282418 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.501807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:12.561349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:12.783005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.783005 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:13.002304 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:13.061129 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:13.283315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:13.283435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:13.501972 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:13.561333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:13.783487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:13.783655 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.001242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:14.061103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:14.282022 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.283080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:14.501717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:14.560630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:14.781894 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.782368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:15.001528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:15.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:15.282562 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:15.282888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:15.500950 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:15.560206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:15.782473 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:15.783016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.001340 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:16.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:16.283085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.283196 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:16.501224 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:16.560432 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:16.783077 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.783121 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.001536 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:17.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:17.281574 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.282511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:17.502499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:17.560896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:17.781956 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.782624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:18.000392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:18.060943 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:18.283184 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:18.283879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:18.501537 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:18.562926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:18.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:18.782451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.001149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.061264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.282752 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:19.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.502206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.560605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.782509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.782554 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.002254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.061241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.282485 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.282882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:20.500924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.561822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.783542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.002205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.060747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.282021 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.282563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.505254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.561819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.782724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.000999 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.060710 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.281865 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:22.282163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.501978 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.562175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.782908 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.782992 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.001604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.061218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.282416 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.282830 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:23.501539 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.562050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.782303 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.784161 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.001477 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.060126 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.282030 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.283809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.501806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.602840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.782907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.000878 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.061123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.282013 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.283761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.504764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.606761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.782107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.782874 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.000621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.061556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.285974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.286315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.502580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.561105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.783739 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.000735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.061233 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.282071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:27.285152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.501573 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.561120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.782732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.782840 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.000630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.060922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.282390 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.283472 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.501080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.560454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.782967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.782976 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.237835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.237889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.336150 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.336331 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.501907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.602786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.782929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:30.001264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:30.060690 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:30.281762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:30.282475 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:30.501884 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:30.572349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:30.783064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:30.783109 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.002526 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:31.062561 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:31.283136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:31.283179 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.501139 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:31.560586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:31.784336 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.784346 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:32.001433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:32.060760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:32.290054 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:32.291744 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:32.500808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:32.568201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:32.782533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:32.782904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:33.001710 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:33.061374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:33.282933 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:33.284426 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:33.501589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:33.561081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:33.784027 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:33.784261 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:34.002823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:34.063430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:34.284309 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:34.285663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:34.500807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:34.561036 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:34.784211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:34.784213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:35.001454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:35.061492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:35.281525 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:35.282364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:35.501644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:35.560943 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:35.783199 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:35.783563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.002111 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:36.060708 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:36.281535 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:36.283996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.861446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:36.861593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:36.965825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.966272 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.001158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:37.061370 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:37.283380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:37.283513 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.501468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:37.561192 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:37.785517 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.786292 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:38.001484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:38.061069 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:38.284714 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:38.284846 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:38.502574 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:38.561181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:38.782537 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:38.783069 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:39.001928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:39.061873 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:39.282406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:39.283481 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:39.503169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:39.561098 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:39.782813 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:39.783641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:40.002181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:40.060266 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:40.282891 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:40.283849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:40.500843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:40.560442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:40.782926 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:40.783029 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:41.001321 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:41.060760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:41.281798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:41.284037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:41.502572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:41.560951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:41.782285 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:41.783051 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.001897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:42.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:42.283725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.283888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:42.501480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:42.561461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:42.782548 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.782713 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.093940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:43.097843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:43.282818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.282819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:43.501106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:43.560130 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:43.782663 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.783944 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.001422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:44.060503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:44.281922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:44.283136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.501600 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:44.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:44.782904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.782953 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:45.001192 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:45.060597 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:45.283117 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:45.283173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:45.501174 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:45.560528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:45.786937 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:45.787508 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:46.003194 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:46.061532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:46.283078 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:46.283645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:46.501606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:46.561149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:46.783542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:46.783577 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:47.001484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:47.061088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:47.282533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:47.283511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:47.501685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:47.560979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:47.783792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:47.783801 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:48.000652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:48.061347 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:48.282791 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:48.283149 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:48.501196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:48.560571 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:48.782724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:48.783665 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:49.001578 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:49.060917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:49.283443 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:49.283529 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:49.501548 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:49.560886 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:49.782606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:49.782806 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:50.001040 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.060499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.282867 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.283070 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:50.501307 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.782746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.782790 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.000827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.061599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.281741 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.282303 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:51.501882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.561159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.782745 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.784064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.001127 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.060734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.281924 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.282442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.501618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.560955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.782622 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.001976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.060014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.283833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.283868 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.501946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.787788 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.788281 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.001841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.282587 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.282894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.501076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.560738 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.783982 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.784379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.001546 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.061794 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.282534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:55.283165 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.501579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.560818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.782537 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.001725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.060844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.282248 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.283345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:56.501508 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.560858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.781927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.783218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.001706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.061118 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.283582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.283762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:57.501038 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.560439 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.783590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.783720 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.001746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.061827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.282480 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.282960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:58.501434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.561028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.781998 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.782879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.001764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.061200 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.282609 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:59.282747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.501377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.560960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.785243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.785330 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.001691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.061010 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.282764 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.283580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.501865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.561741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.784015 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.784091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.060981 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.282859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:01.283036 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.501809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.561922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.782501 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.783709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.002244 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.061572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.284366 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.501516 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.562167 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.782718 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.783603 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.002195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.060569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.283243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.283492 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:03.501693 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.560599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.783852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.784006 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.000924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.061226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.282297 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.282987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.501089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.560458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.783051 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.783361 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.001357 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.060980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.282432 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.284945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:05.501078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.560392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.782556 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.782745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.001356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:06.060485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:06.282979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.283057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:06.500697 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:06.561446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:06.783120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:06.783258 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:07.001429 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:07.060755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:07.281892 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:07.282422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:07.501870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:07.561285 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:07.783836 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:07.783869 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.001179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:08.061434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:08.282620 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:08.282643 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.501890 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:08.561334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:08.782409 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:08.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.001428 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:09.060624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:09.283619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.283843 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:09.500869 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:09.561327 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:09.786343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:09.786990 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:10.001363 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:10.061669 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:10.281724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:10.283241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:10.501499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:10.560382 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:10.783379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:10.783703 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.006867 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:11.061528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:11.282068 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.284097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:11.501425 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:11.561482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:11.781830 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:11.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:12.003000 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:12.061220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:12.283490 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:12.283632 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:12.502107 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:12.560563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:12.786245 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:12.787717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:13.002660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:13.061638 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:13.282127 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:13.283171 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:13.501269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:13.560543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:13.783150 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:13.783156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:14.001885 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:14.061206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:14.283314 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:14.283499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:14.505208 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:14.561163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:14.782762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:14.783841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:15.003346 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:15.060844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:15.282760 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:15.284010 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:15.501266 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:15.560665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:15.781811 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:15.782474 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:16.263325 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:16.263338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:16.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:16.283738 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:16.502117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:16.604450 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:16.783760 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:16.783855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:17.005983 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:17.105360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:17.282754 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:17.282882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:17.500988 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:17.560342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:17.782772 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:17.783686 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:18.007857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:18.061140 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:18.283671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:18.283796 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:18.501209 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:18.560948 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:18.783319 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:18.783461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.001371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:19.061031 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:19.282807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:19.283969 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.501517 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:19.561032 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:19.782932 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:19.783012 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.005480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:20.060901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:20.282259 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.283412 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:20.502027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:20.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:20.782626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:20.783395 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:21.001871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:21.061472 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:21.283060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:21.283210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:21.501633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:21.561484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:21.782741 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:21.783745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:22.001089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:22.060638 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:22.283014 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:22.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:22.501633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:22.560933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:22.782511 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:22.783627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:23.001249 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:23.060586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:23.281968 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:23.282925 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:23.501824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:23.561702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:23.781838 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:23.782821 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:24.000909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:24.061364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:24.282635 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:24.282833 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:24.500870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:24.561501 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:24.783353 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:24.783411 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:25.001919 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:25.060593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:25.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:25.283280 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:25.501682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:25.560920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:25.782234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:25.782607 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.001990 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:26.062631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:26.281975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:26.283634 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.502337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:26.561388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:26.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.783873 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.000786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.061090 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.282519 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.283219 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:27.502098 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.560684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.782103 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.782356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.001961 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.061081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.283082 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:28.283091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.502080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.560369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.782819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.782888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.001300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.060528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.282927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:29.500881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.561931 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.782352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.001314 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.061754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:30.283911 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.501691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.561708 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.782920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.783505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.018759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.118123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.283780 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.283813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:31.500732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.561257 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.782789 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.783857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.000941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.061352 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.283225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.283376 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.502377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.560813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.782071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.782893 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.001627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.061719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.282356 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.282853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.501995 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.560218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.783100 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.783628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.061301 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.282792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.283319 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.502265 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.603312 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.783237 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.783602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.001558 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.061771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.282165 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.283085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.501433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.560951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.782571 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.783567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.001993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.060500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.282630 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:36.282912 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.501547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.561085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.783668 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.783838 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:37.001644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:37.061735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:37.282616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:37.283047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:37.501624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:37.562291 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:37.783863 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:37.784060 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:38.001210 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:38.060997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:38.283100 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:38.283242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:38.501949 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:38.561400 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:38.783522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:38.783562 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.001632 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:39.061775 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:39.283431 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:39.283517 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.502108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:39.561075 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:39.782288 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.783100 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:40.001536 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:40.061613 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:40.282272 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:40.282780 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:40.501799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:40.561026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:40.782057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:40.783645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.002564 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:41.062621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:41.282271 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:41.283169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.501391 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:41.562411 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:41.783324 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.783579 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:42.002705 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:42.061893 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:42.282583 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:42.283671 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:42.502733 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:42.562940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:42.782853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:42.783073 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:43.001824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:43.062102 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:43.282830 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:43.283751 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:43.501119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:43.560492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:43.784115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:43.784145 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.001522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:44.061345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:44.282831 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.283549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:44.503997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:44.607178 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:44.782832 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.783717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.002427 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:45.061729 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:45.282878 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:45.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.501997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:45.561163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:45.783552 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.783659 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:46.001682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:46.062807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:46.282597 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:46.283939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:46.503275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:46.561513 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:46.784613 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:46.784911 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.001562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:47.061725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:47.283169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:47.283405 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.501322 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:47.561186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:47.782927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.784021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:48.001774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:48.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[log condensed: the same four kapi.go:96 "waiting for pod" poll messages (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) repeat on a ~250-500ms cadence from 14:21:48 through 14:22:46, with every poll reporting Pending: [<nil>]]
	I0630 14:22:46.501006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.560037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.784753 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.784785 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.001157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.060804 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.283335 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:47.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.782851 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.783119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.001360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.282370 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:48.283342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.501709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.783888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.784092 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.001883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.060787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.283083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:49.283344 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.501731 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.782681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.000966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.060550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.283074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:50.501643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.561462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.783025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.002569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.063186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.283275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:51.283325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.501455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.560436 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.782975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.783423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.001631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.061667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.281818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.282342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.501284 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.560864 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.782151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.782348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.007368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.060641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.283706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.284276 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:53.501189 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.560654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.782398 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.782656 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.002682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.061286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.282383 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.283815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.501271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.560549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.790530 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.790755 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.001308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.284397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.284413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:55.501771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.781963 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.782941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.000822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.061650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.283524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.283580 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.501667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.560681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.782151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.281690 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.283202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.501647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.561213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.782612 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.001789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.282211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:58.284618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.500839 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.561378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.784612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.784669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.000744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.062091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.660112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.664035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:59.664534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.665074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.782692 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.783576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.003476 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.061094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.285714 1558425 kapi.go:107] duration metric: took 3m43.507242469s to wait for app.kubernetes.io/name=ingress-nginx ...
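The kapi.go:96 entries above and below are minikube's addon readiness poll: every few hundred milliseconds it lists the pods behind each addon's label selector, logs the observed phase, and repeats until the pods are ready, at which point it emits a kapi.go:107 "duration metric" line like the one above. A minimal sketch of that polling pattern using client-go; the function name, interval, and exact log format here are assumptions for illustration, not minikube's actual implementation:

```go
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls the API server until every pod matching selector in ns
// reports Running, logging the current phase on each attempt, in the spirit
// of the "waiting for pod" lines in this log.
func waitForPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}
```

Note that in this run the kubernetes.io/minikube-addons=registry selector never leaves Pending, so its loop keeps logging for the remainder of this excerpt.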
	I0630 14:23:00.286859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.502299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.561094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.001892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.061673 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.501245 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.005689 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.283736 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.501952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.783177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.002017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.061604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.500854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.561092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.783701 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.063589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.283519 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.501728 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.566277 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.002269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:05.060852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.283974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.507100 1558425 kapi.go:107] duration metric: took 3m45.009344267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:23:05.509228 1558425 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-301682 cluster.
	I0630 14:23:05.510978 1558425 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:23:05.512549 1558425 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
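The three gcp-auth messages above describe the addon's opt-out mechanism: once enabled, it mounts GCP credentials into every new pod unless the pod carries a label with the `gcp-auth-skip-secret` key. A hedged example of such a pod spec built with client-go types (the pod name, image, and label value are illustrative; per the message above, it is the key that matters):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod returns a pod that opts out of the gcp-auth credential
// mount by carrying the gcp-auth-skip-secret label.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			// gcp-auth skips pods labeled with this key (value is arbitrary).
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox",
			}},
		},
	}
}
```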
	I0630 14:23:05.561380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.783374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.062392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.561684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.785144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.066028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.284562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.561973 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.785021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.060666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.561745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.783877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.284091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.561492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.787449 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.062802 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.284110 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.560730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.783003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.060643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.284380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.561869 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.060853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.283759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.560457 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.784225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.061224 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.283671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.560056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.783513 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.061509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.283696 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.561206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.784675 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.061356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.284952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.560611 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.784123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.061089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.283173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.786612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.061952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.284288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.561055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.061797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.283435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.560968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.783185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.061655 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.285318 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.561730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.782858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.061290 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.284108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.560495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.783799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.060435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.283888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.560658 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.784042 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.064259 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.283397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.562304 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.783790 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.062882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.283492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.565989 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.061006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.284421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.561604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.783815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.060798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.283106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.572104 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.783229 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.061003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.283003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.783676 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.061789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.283647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.561595 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.784152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.061056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.284078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.561025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.060975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.284112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.561034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.783332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.060612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.284928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.560487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.784282 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.061202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.283691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.561004 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.783682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.283339 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.561471 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.783951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.060926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.283825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.563195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.783726 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.060359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.283321 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.561124 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.061349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.283415 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.561084 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.784344 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.061159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.283670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.562677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.783294 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.062782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.284848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.560236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.783962 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.060039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.283768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.560166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.782740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:39.060825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:39.284072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:39.561353 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:39.783269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:40.061500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:40.283553 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:40.561115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:40.784062 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:41.061241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:41.283888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:41.560612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:41.784453 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:42.061524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:42.283887 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:42.560352 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:42.783080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:43.060608 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:43.283756 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:43.561250 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:43.783439 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:44.061813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:44.284043 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:44.560423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:44.783723 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:45.062299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:45.283512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:45.562182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:45.783464 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:46.061770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:46.283290 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:46.561127 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:46.784143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:47.062746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:47.283685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:47.561750 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:47.783610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:48.061340 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:48.284254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:48.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:48.783030 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:49.060658 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:49.283841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:49.561356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:49.783263 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:50.061883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:50.283413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:50.561440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:50.783774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:51.060233 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:51.283243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:51.561692 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:51.783771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:52.060778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:52.283008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:52.560248 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:52.784031 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:53.061426 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:53.284243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:53.561964 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:53.783354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:54.061484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:54.283980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:54.560599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:54.783926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:55.060942 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:55.284120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:55.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:55.782802 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:56.059964 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:56.283717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:56.560585 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:56.784927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:57.061040 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:57.283344 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:57.561904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:57.783533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:58.061374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:58.284877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:58.560774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:58.784163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:59.061765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:59.284774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:59.561857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:59.782773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:00.061141 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:00.283396 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:00.561139 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:00.783625 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:01.061333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:01.283747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:01.560949 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:01.783456 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:02.061482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:02.284158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:02.560735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:02.784827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:03.061045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:03.282806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:03.560671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:03.782706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:04.060646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:04.283286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:04.560657 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:04.783580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:05.061560 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:05.283579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:05.561242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:05.783654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:06.061539 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:06.283732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:06.560228 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:06.783593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:07.061818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:07.283996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:07.561190 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:07.783368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:08.062755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:08.283379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:08.561279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:08.783976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:09.061115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:09.285316 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:09.561149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:09.783381 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:10.061707 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:10.284158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:10.560899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:10.783331 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:11.060911 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:11.285242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:11.567687 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:11.783399 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:12.061770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:12.284164 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:12.561303 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:12.784575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:13.062079 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:13.283362 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:13.561544 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:13.784026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:14.061171 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:14.284055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:14.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:14.784816 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:15.061671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:15.285032 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:15.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:15.782955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:16.060555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:16.283695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:16.561223 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:16.784108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:17.061443 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:17.283885 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:17.560716 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:17.783754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:18.061542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:18.282788 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:18.560770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:18.783579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:19.060318 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:19.283045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:19.560843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:19.782930 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:20.061222 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:20.282971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:20.560677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:20.783818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:21.060551 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:21.283550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:21.562179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:21.784378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:22.062214 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:22.283320 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:22.560609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:22.783739 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:23.060891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:23.283079 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:23.561022 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:23.783812 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:24.060803 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:24.283620 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:24.561450 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:24.784169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:25.061522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:25.283646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:25.561354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:25.784907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:26.061231 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:26.283357 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:26.561047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:26.782954 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:27.062644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:27.283870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:27.560460 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:27.783972 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:28.061026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:28.283434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:28.560383 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:28.784236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:29.061863 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:29.283492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:29.561072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:29.784790 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:30.060929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:30.283116 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:30.560849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:30.784365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:31.061044 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:31.283485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:31.560958 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:31.783343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:32.060933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:32.283256 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:32.560785 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:32.783833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:33.063333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:33.283905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:33.561202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:33.783647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:34.060633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:34.283403 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:34.561258 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:34.783824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:35.061027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:35.283280 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:35.560614 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:35.783666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:36.060343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:36.283562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:36.561179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:36.783181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:37.061128 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:37.284062 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:37.560766 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:37.783336 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:38.061890 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:38.283765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:38.561181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:38.782988 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:39.061782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:39.284045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:39.560892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:39.783646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:40.061732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:40.283168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:40.561039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:40.783011 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:41.060663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:41.284034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:41.560401 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:41.783929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:42.060886 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:42.283413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:42.560898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:42.783070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:43.061272 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:43.284495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:43.566045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:43.785033 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:44.060787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:44.284857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:44.563055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:44.782917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:45.062050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:45.288461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:45.560836 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:45.783182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:46.060851 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:46.282596 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:46.561215 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:46.783686 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:47.061881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:47.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:47.561484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:47.784227 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:48.061049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:48.283508 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:48.560991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:48.783228 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:49.061557 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:49.283945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:49.560814 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:49.783480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:50.062151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:50.283328 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:50.561147 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:50.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:51.061581 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:51.284088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:51.561199 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:51.784000 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:52.060829 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:52.283475 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:52.561084 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:52.783246 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:53.061297 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:53.283184 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:53.561060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:53.783926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:54.060947 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:54.284652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:54.560498 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:54.783783 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:55.061342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:55.284840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:55.560442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:55.791617 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:56.061618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:56.286833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:56.560475 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:56.783629 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:57.061136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:57.283837 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:57.562671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:57.783967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:58.060688 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:58.283033 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:58.560616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:58.783876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:59.060565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:59.283359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:59.561198 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:59.783494 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:00.062642 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:00.283954 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:00.560177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:00.782981 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:01.060549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:01.283643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:01.561232 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:01.783995 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:02.060913 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:02.283540 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:02.561001 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:02.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:03.061494 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:03.283619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:03.561423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:03.783816 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:04.061121 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:04.283938 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:04.560330 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:04.783093 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:05.061253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:05.283468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:05.561349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:05.783656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:06.061451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:06.284555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:06.561027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:06.783118 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:07.060941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:07.283486 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:07.560979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:07.783987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:08.061469 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:08.282865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:08.560230 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:08.783905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:09.060919 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:09.284341 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:09.561725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:09.782920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:10.061064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:10.283364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:10.560694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:10.783580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:11.061012 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:11.282946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:11.560317 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:11.783830 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:12.060685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:12.283378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:12.561716 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:12.782965 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:13.061099 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:13.282813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:13.560694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:13.783665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:14.061372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:14.282565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:14.561326 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:14.783180 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:15.060939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:15.283013 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:15.560848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:15.783206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.061333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.283487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.560928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.779853 1558425 kapi.go:107] duration metric: took 6m0.000148464s to wait for kubernetes.io/minikube-addons=registry ...
	W0630 14:25:16.780114 1558425 out.go:270] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0630 14:25:17.061823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:17.560570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.061810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.557742 1558425 kapi.go:107] duration metric: took 6m0.000905607s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0630 14:25:18.557918 1558425 out.go:270] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0630 14:25:18.560047 1558425 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth
	I0630 14:25:18.561439 1558425 addons.go:514] duration metric: took 6m10.426236235s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth]
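For reference, the "waiting for pod ... current state: Pending" lines above come from a label-selector poll loop that runs until the matching pods leave Pending or a hard deadline expires; the 6m0s "context deadline exceeded" failures for the registry and csi-hostpath-driver addons are that deadline firing. The sketch below is a minimal, hypothetical reconstruction of such a loop, not minikube's actual kapi.go implementation; the function name, the kube-system namespace, and the ~500ms interval are assumptions read off the log cadence.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForAddonPods polls pods matching selector (for example
// "kubernetes.io/minikube-addons=registry") until they all leave Pending
// or the 6-minute deadline expires. Hypothetical sketch; not minikube code.
func waitForAddonPods(cs kubernetes.Interface, selector string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond) // assumed poll interval
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			done := true
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					done = false // this is the "current state: Pending" case logged above
				}
			}
			if done {
				return nil
			}
		}
		select {
		case <-ticker.C: // poll again
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded" after 6m0s
		}
	}
}

In this run the registry pod never progressed past Pending, so the loop ran the full six minutes and returned the deadline error reported in the warnings above.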
	I0630 14:25:18.561506 1558425 start.go:246] waiting for cluster config update ...
	I0630 14:25:18.561537 1558425 start.go:255] writing updated cluster config ...
	I0630 14:25:18.561951 1558425 ssh_runner.go:195] Run: rm -f paused
	I0630 14:25:18.569844 1558425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:25:18.574216 1558425 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.580161 1558425 pod_ready.go:94] pod "coredns-674b8bbfcf-gcxhf" is "Ready"
	I0630 14:25:18.580187 1558425 pod_ready.go:86] duration metric: took 5.939771ms for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.583580 1558425 pod_ready.go:83] waiting for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.589631 1558425 pod_ready.go:94] pod "etcd-addons-301682" is "Ready"
	I0630 14:25:18.589656 1558425 pod_ready.go:86] duration metric: took 6.047747ms for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.592675 1558425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.598838 1558425 pod_ready.go:94] pod "kube-apiserver-addons-301682" is "Ready"
	I0630 14:25:18.598865 1558425 pod_ready.go:86] duration metric: took 6.165834ms for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.608664 1558425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.974819 1558425 pod_ready.go:94] pod "kube-controller-manager-addons-301682" is "Ready"
	I0630 14:25:18.974852 1558425 pod_ready.go:86] duration metric: took 366.160564ms for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.183963 1558425 pod_ready.go:83] waiting for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.575199 1558425 pod_ready.go:94] pod "kube-proxy-cm28f" is "Ready"
	I0630 14:25:19.575240 1558425 pod_ready.go:86] duration metric: took 391.247311ms for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.774681 1558425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.173968 1558425 pod_ready.go:94] pod "kube-scheduler-addons-301682" is "Ready"
	I0630 14:25:20.174011 1558425 pod_ready.go:86] duration metric: took 399.300804ms for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.174030 1558425 pod_ready.go:40] duration metric: took 1.603886991s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
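The per-pod "Ready" checks above (pod_ready.go) decide readiness from the pod's status conditions. As a minimal illustration, assuming the k8s.io/api types, a pod counts as "Ready" when its PodReady condition is True; the helper name below is hypothetical, not minikube's actual code.

package main

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady status condition is True,
// which the kubelet sets once every container passes its readiness probe.
// Hypothetical helper for illustration only.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

Each control-plane pod here returned Ready within a few hundred milliseconds, which is why this extra wait finished in about 1.6s instead of approaching its 4m0s cap.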
	I0630 14:25:20.223671 1558425 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:25:20.225538 1558425 out.go:177] * Done! kubectl is now configured to use "addons-301682" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.242712753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294173242683969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53556fe7-31e1-46e3-a93d-9c35480b2b8c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.243697681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8fffcd5-2980-4e65-8d6d-813d98ec03ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.243921781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8fffcd5-2980-4e65-8d6d-813d98ec03ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.244516198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kub
ernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{i
o.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d
8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395a
f61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f5
7040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b
63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e172
6a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510
e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fc
d0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/intercept
ors.go:74" id=c8fffcd5-2980-4e65-8d6d-813d98ec03ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.290024807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84b66eff-04e1-4f3d-a7c5-f6c0880bd8ee name=/runtime.v1.RuntimeService/Version
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.290155518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84b66eff-04e1-4f3d-a7c5-f6c0880bd8ee name=/runtime.v1.RuntimeService/Version
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.291927498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b407417e-650e-4999-9bce-6ac34fd5dfc0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.293605819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294173293495011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b407417e-650e-4999-9bce-6ac34fd5dfc0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.294321709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=908f72e6-dffd-40a2-94e6-3fe8c660eac3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.294376517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=908f72e6-dffd-40a2-94e6-3fe8c660eac3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.295075762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kub
ernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{i
o.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d
8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395a
f61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f5
7040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b
63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e172
6a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510
e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fc
d0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/intercept
ors.go:74" id=908f72e6-dffd-40a2-94e6-3fe8c660eac3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.341596768Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c083d62-9aa9-47a6-9329-4fa85258b8ce name=/runtime.v1.RuntimeService/Version
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.341819467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c083d62-9aa9-47a6-9329-4fa85258b8ce name=/runtime.v1.RuntimeService/Version
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.343895540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e661e696-f3f0-44b3-9312-f5595bbd3273 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.345760437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294173345726670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e661e696-f3f0-44b3-9312-f5595bbd3273 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.347941452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02f1231e-1a83-4f47-898f-875cebb14fe2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.348159465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02f1231e-1a83-4f47-898f-875cebb14fe2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.349067250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kub
ernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{i
o.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d
8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395a
f61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f5
7040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b
63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e172
6a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510
e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fc
d0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/intercept
ors.go:74" id=02f1231e-1a83-4f47-898f-875cebb14fe2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.391142440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5894f05d-2bf2-49ea-8d43-2c9f1a41d8a8 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.391306280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5894f05d-2bf2-49ea-8d43-2c9f1a41d8a8 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.393397503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceb3870d-7058-4340-bd37-164cf9c9fda5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.394988997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294173394950572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceb3870d-7058-4340-bd37-164cf9c9fda5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.395857221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a036a74-c198-4967-938e-aef70d1ca891 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.395956291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a036a74-c198-4967-938e-aef70d1ca891 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:36:13 addons-301682 crio[849]: time="2025-06-30 14:36:13.396876385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kub
ernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{i
o.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d
8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395a
f61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f5
7040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b
63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e172
6a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510
e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fc
d0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/intercept
ors.go:74" id=0a036a74-c198-4967-938e-aef70d1ca891 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	ccb1fec83c55c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          10 minutes ago      Running             busybox                                  0                   744d3a8558a51       busybox
	f4356fb8a203d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	505ec6a97e3e1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          11 minutes ago      Running             csi-provisioner                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	0e8810b68e820       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            12 minutes ago      Running             liveness-probe                           0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	977ef3af77456       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           13 minutes ago      Running             hostpath                                 0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	5dfe9d02b1b1a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                13 minutes ago      Running             node-driver-registrar                    0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	470ef449849e9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      14 minutes ago      Running             volume-snapshot-controller               0                   4736a1c095805       snapshot-controller-68b874b76f-m97pd
	90d1724e2a8e9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   14 minutes ago      Running             csi-external-health-monitor-controller   0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	089511c925cdb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      14 minutes ago      Running             volume-snapshot-controller               0                   901b27bd18ec3       snapshot-controller-68b874b76f-zvnk2
	c2e8c85ce8151       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              15 minutes ago      Running             csi-resizer                              0                   754958dc28d19       csi-hostpath-resizer-0
	ba49554ce7e85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             15 minutes ago      Running             csi-attacher                             0                   ef302c090f9a8       csi-hostpath-attacher-0
	87b37034569df       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              15 minutes ago      Running             registry-proxy                           0                   ab80df45e204e       registry-proxy-2dgr9
	70d635c9d667c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     16 minutes ago      Running             amd-gpu-device-plugin                    0                   3d37e16d91d2b       amd-gpu-device-plugin-g5z6w
	f3766ac202b89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             16 minutes ago      Running             storage-provisioner                      0                   97a7ca87e0fdb       storage-provisioner
	5aadabb8b1bfc       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                                             17 minutes ago      Running             coredns                                  0                   78956e77203cb       coredns-674b8bbfcf-gcxhf
	f10061ba824c0       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                                                             17 minutes ago      Running             kube-proxy                               0                   b60868a950e81       kube-proxy-cm28f
	ccc99095a0e73       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                                                             17 minutes ago      Running             kube-apiserver                           0                   3b49e7f986574       kube-apiserver-addons-301682
	b4d0fe15b4640       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                                                             17 minutes ago      Running             kube-controller-manager                  0                   793d3507bd395       kube-controller-manager-addons-301682
	a117b554832ef       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                                             17 minutes ago      Running             etcd                                     0                   ecf8d198683c7       etcd-addons-301682
	4e556fe1e25cc       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                                                             17 minutes ago      Running             kube-scheduler                           0                   d882c0c670fce       kube-scheduler-addons-301682
	
	
	==> coredns [5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2] <==
	[INFO] 10.244.0.7:50398 - 21272 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00015723s
	[INFO] 10.244.0.7:33981 - 63743 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000153172s
	[INFO] 10.244.0.7:33981 - 4559 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000134163s
	[INFO] 10.244.0.7:33981 - 61646 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000127698s
	[INFO] 10.244.0.7:33981 - 64510 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000069185s
	[INFO] 10.244.0.7:33981 - 28902 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000100365s
	[INFO] 10.244.0.7:33981 - 15014 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00007933s
	[INFO] 10.244.0.7:33981 - 33027 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000105351s
	[INFO] 10.244.0.7:33981 - 47665 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000112233s
	[INFO] 10.244.0.7:55119 - 6784 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000247403s
	[INFO] 10.244.0.7:55119 - 47925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000419779s
	[INFO] 10.244.0.7:55119 - 45848 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000089539s
	[INFO] 10.244.0.7:55119 - 23693 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000259422s
	[INFO] 10.244.0.7:55119 - 9441 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097016s
	[INFO] 10.244.0.7:55119 - 52894 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000190346s
	[INFO] 10.244.0.7:55119 - 32241 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000110754s
	[INFO] 10.244.0.7:55119 - 19655 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000186019s
	[INFO] 10.244.0.7:36001 - 28184 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000231568s
	[INFO] 10.244.0.7:36001 - 1678 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000334015s
	[INFO] 10.244.0.7:36001 - 19550 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000076823s
	[INFO] 10.244.0.7:36001 - 676 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000068907s
	[INFO] 10.244.0.7:36001 - 12649 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000065642s
	[INFO] 10.244.0.7:36001 - 24776 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076682s
	[INFO] 10.244.0.7:36001 - 10007 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000073424s
	[INFO] 10.244.0.7:36001 - 27218 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000068629s
	
	
	==> describe nodes <==
	Name:               addons-301682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-301682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-301682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-301682
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-301682"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-301682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:36:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:32:21 +0000   Mon, 30 Jun 2025 14:19:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-301682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3f7748b45e54c5d95a766f7ac118097
	  System UUID:                c3f7748b-45e5-4c5d-95a7-66f7ac118097
	  Boot ID:                    4dcad91c-eb4d-46c9-ae52-10be6c00fd59
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 amd-gpu-device-plugin-g5z6w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-674b8bbfcf-gcxhf                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpathplugin-h4qg2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-addons-301682                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         17m
	  kube-system                 kube-apiserver-addons-301682             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-301682    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-cm28f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-301682             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-694bd45846-x8cnn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-proxy-2dgr9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-68b874b76f-m97pd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-68b874b76f-zvnk2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeReady                17m                kubelet          Node addons-301682 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node addons-301682 event: Registered Node addons-301682 in Controller
	
	
	==> dmesg <==
	[  +3.981178] kauditd_printk_skb: 99 callbacks suppressed
	[ +14.133007] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.888041] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:20] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.101498] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.564016] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.018820] kauditd_printk_skb: 4 callbacks suppressed
	[Jun30 14:22] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.468740] kauditd_printk_skb: 33 callbacks suppressed
	[Jun30 14:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.720029] kauditd_printk_skb: 37 callbacks suppressed
	[Jun30 14:25] kauditd_printk_skb: 33 callbacks suppressed
	[  +3.578772] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.590938] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.177192] kauditd_printk_skb: 20 callbacks suppressed
	[Jun30 14:26] kauditd_printk_skb: 4 callbacks suppressed
	[ +46.460054] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:27] kauditd_printk_skb: 2 callbacks suppressed
	[ +35.275184] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:29] kauditd_printk_skb: 9 callbacks suppressed
	[ +22.041327] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:30] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:31] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb] <==
	{"level":"info","ts":"2025-06-30T14:21:16.256094Z","caller":"traceutil/trace.go:171","msg":"trace[752785918] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"419.629539ms","start":"2025-06-30T14:21:15.836340Z","end":"2025-06-30T14:21:16.255969Z","steps":["trace[752785918] 'process raft request'  (duration: 416.770167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.256259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:21:15.836292Z","time spent":"419.882706ms","remote":"127.0.0.1:55816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1189 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-06-30T14:22:57.074171Z","caller":"traceutil/trace.go:171","msg":"trace[97580462] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"235.032412ms","start":"2025-06-30T14:22:56.839110Z","end":"2025-06-30T14:22:57.074143Z","steps":["trace[97580462] 'process raft request'  (duration: 234.613297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.462692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650406Z","caller":"traceutil/trace.go:171","msg":"trace[1036457483] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"155.081366ms","start":"2025-06-30T14:22:59.495275Z","end":"2025-06-30T14:22:59.650356Z","steps":["trace[1036457483] 'range keys from in-memory index tree'  (duration: 154.411147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:22:59.650586Z","caller":"traceutil/trace.go:171","msg":"trace[806257844] transaction","detail":"{read_only:false; response_revision:1386; number_of_response:1; }","duration":"115.895314ms","start":"2025-06-30T14:22:59.534680Z","end":"2025-06-30T14:22:59.650576Z","steps":["trace[806257844] 'process raft request'  (duration: 113.707335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"485.393683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650888Z","caller":"traceutil/trace.go:171","msg":"trace[707366630] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1385; }","duration":"486.585604ms","start":"2025-06-30T14:22:59.164295Z","end":"2025-06-30T14:22:59.650881Z","steps":["trace[707366630] 'range keys from in-memory index tree'  (duration: 485.334873ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.650922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.164282Z","time spent":"486.621786ms","remote":"127.0.0.1:55612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.09899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651010Z","caller":"traceutil/trace.go:171","msg":"trace[926388769] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"375.285797ms","start":"2025-06-30T14:22:59.275719Z","end":"2025-06-30T14:22:59.651005Z","steps":["trace[926388769] 'range keys from in-memory index tree'  (duration: 374.055569ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.275706Z","time spent":"375.316283ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.573265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651095Z","caller":"traceutil/trace.go:171","msg":"trace[444156936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"374.826279ms","start":"2025-06-30T14:22:59.276264Z","end":"2025-06-30T14:22:59.651090Z","steps":["trace[444156936] 'range keys from in-memory index tree'  (duration: 373.54342ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.276255Z","time spent":"374.850773ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.221471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651162Z","caller":"traceutil/trace.go:171","msg":"trace[72079455] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1385; }","duration":"136.411789ms","start":"2025-06-30T14:22:59.514744Z","end":"2025-06-30T14:22:59.651156Z","steps":["trace[72079455] 'range keys from in-memory index tree'  (duration: 135.196228ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:25:50.156282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.241875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-06-30T14:25:50.156408Z","caller":"traceutil/trace.go:171","msg":"trace[1656189336] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1889; }","duration":"105.429353ms","start":"2025-06-30T14:25:50.050958Z","end":"2025-06-30T14:25:50.156387Z","steps":["trace[1656189336] 'range keys from in-memory index tree'  (duration: 105.167742ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:28:59.297152Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1538}
	{"level":"info","ts":"2025-06-30T14:28:59.333481Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1538,"took":"35.184312ms","hash":3459685430,"current-db-size-bytes":7704576,"current-db-size":"7.7 MB","current-db-size-in-use-bytes":4759552,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2025-06-30T14:28:59.333691Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3459685430,"revision":1538,"compact-revision":-1}
	{"level":"info","ts":"2025-06-30T14:33:59.305379Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2294}
	{"level":"info","ts":"2025-06-30T14:33:59.329509Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":2294,"took":"23.475198ms","hash":2034814259,"current-db-size-bytes":7704576,"current-db-size":"7.7 MB","current-db-size-in-use-bytes":4259840,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2025-06-30T14:33:59.329599Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2034814259,"revision":2294,"compact-revision":1538}
	
	
	==> kernel <==
	 14:36:13 up 17 min,  0 users,  load average: 0.09, 0.30, 0.44
	Linux addons-301682 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a] <==
	E0630 14:20:30.566598       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.249.255:443: connect: connection refused" logger="UnhandledError"
	W0630 14:20:30.568692       1 handler_proxy.go:99] no RequestInfo found in the context
	E0630 14:20:30.568788       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0630 14:20:30.592794       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0630 14:20:30.602722       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0630 14:25:32.039384       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43658: use of closed network connection
	E0630 14:25:32.235328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43690: use of closed network connection
	I0630 14:25:35.327796       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:40.911437       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:25:41.137079       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.71.181"}
	I0630 14:25:41.142822       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:41.721263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.215.125"}
	I0630 14:25:47.346218       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:26:31.606219       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:27:03.338971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:27:51.135976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:00.946999       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:59.400677       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:31:51.314031       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0630 14:31:52.350724       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0630 14:33:45.291375       1 watch.go:278] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0630 14:33:45.909205       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0] <==
	I0630 14:32:08.623346       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0630 14:32:08.623448       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0630 14:32:12.901188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:32:22.822130       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:32:30.243428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:32:37.822383       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:32:52.823284       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:33:03.875501       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:33:07.823610       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:33:22.824630       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:33:37.825423       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:33:49.613346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:33:52.826240       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I0630 14:33:56.044104       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	E0630 14:34:07.826722       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:34:22.827142       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:34:22.960245       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:34:37.827221       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:34:52.828060       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:35:07.828208       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:35:19.936803       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:35:22.828681       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:35:37.829659       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:35:52.830066       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:36:07.830776       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
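Note on the errors above: every "local-path" not found message means PVC default/test-pvc names a StorageClass that was never registered, so the persistentvolume-binder retries on its ~15s resync and the claim stays Pending. A quick triage (a hypothetical follow-up, not something this run executed) would be to list the registered classes and inspect the claim:

	kubectl --context addons-301682 get storageclass
	kubectl --context addons-301682 -n default describe pvc test-pvc

With the storage-provisioner-rancher addon healthy, a class named local-path (provisioner rancher.io/local-path) should appear in that list; its absence matches the TestAddons/parallel/LocalPath failure recorded below.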
	
	
	==> kube-proxy [f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b] <==
	E0630 14:19:09.616075       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:19:09.628197       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0630 14:19:09.628280       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:19:09.728584       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:19:09.728641       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:19:09.728663       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:19:09.760004       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:19:09.760419       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:19:09.760431       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:19:09.761800       1 config.go:199] "Starting service config controller"
	I0630 14:19:09.761820       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:19:09.764743       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:19:09.764796       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:19:09.764830       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:19:09.764834       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:19:09.770113       1 config.go:329] "Starting node config controller"
	I0630 14:19:09.770142       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:19:09.862889       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:19:09.865227       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:19:09.865265       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:19:09.870697       1 shared_informer.go:357] "Caches are synced" controller="node config"
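The nftables complaint at startup is kube-proxy trying to clear leftover ip6 nftables rules on a kernel that does not support them; together with "No iptables support for family" for IPv6, it simply settles on single-stack IPv4 iptables mode, so the error is cosmetic for this run. If the proxy rules needed verifying, a hypothetical check from inside the node would be:

	minikube -p addons-301682 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head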
	
	
	==> kube-scheduler [4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627] <==
	E0630 14:19:00.996185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:19:00.996326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:19:00.996316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:00.996403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:00.996618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:00.998826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:00.999006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:01.002700       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.002834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:01.865362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:01.884714       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.908759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:01.937379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:19:01.938367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:01.983087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:19:02.032891       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:19:02.058487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:19:02.131893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:02.191157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:02.310584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:02.326588       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:02.381605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 14:19:04.769814       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 14:35:24 addons-301682 kubelet[1543]: E0630 14:35:24.697050    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:35:26 addons-301682 kubelet[1543]: E0630 14:35:26.695708    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="32226795-7a22-4935-b60c-8553d2716e86"
	Jun 30 14:35:34 addons-301682 kubelet[1543]: E0630 14:35:34.212885    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294134212483272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:35:34 addons-301682 kubelet[1543]: E0630 14:35:34.212931    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294134212483272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:35:36 addons-301682 kubelet[1543]: I0630 14:35:36.695843    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:35:36 addons-301682 kubelet[1543]: E0630 14:35:36.698086    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:35:36 addons-301682 kubelet[1543]: E0630 14:35:36.698366    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a7647f82-c5fc-422d-8b99-fe25edb95f59"
	Jun 30 14:35:44 addons-301682 kubelet[1543]: E0630 14:35:44.217375    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294144216709666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:35:44 addons-301682 kubelet[1543]: E0630 14:35:44.217423    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294144216709666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:35:44 addons-301682 kubelet[1543]: I0630 14:35:44.695943    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:35:47 addons-301682 kubelet[1543]: I0630 14:35:47.695876    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:35:47 addons-301682 kubelet[1543]: E0630 14:35:47.698401    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:35:47 addons-301682 kubelet[1543]: E0630 14:35:47.698741    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a7647f82-c5fc-422d-8b99-fe25edb95f59"
	Jun 30 14:35:54 addons-301682 kubelet[1543]: E0630 14:35:54.220020    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294154219691895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:35:54 addons-301682 kubelet[1543]: E0630 14:35:54.220316    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294154219691895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:36:00 addons-301682 kubelet[1543]: I0630 14:36:00.695196    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:36:00 addons-301682 kubelet[1543]: E0630 14:36:00.696466    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a7647f82-c5fc-422d-8b99-fe25edb95f59"
	Jun 30 14:36:00 addons-301682 kubelet[1543]: E0630 14:36:00.697458    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:36:04 addons-301682 kubelet[1543]: E0630 14:36:04.223140    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294164222711747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:36:04 addons-301682 kubelet[1543]: E0630 14:36:04.223498    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294164222711747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:36:06 addons-301682 kubelet[1543]: I0630 14:36:06.695157    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-g5z6w" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:36:12 addons-301682 kubelet[1543]: E0630 14:36:12.503799    1543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:36:12 addons-301682 kubelet[1543]: E0630 14:36:12.503933    1543 kuberuntime_image.go:42] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:36:12 addons-301682 kubelet[1543]: E0630 14:36:12.504233    1543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcnmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(32226795-7a22-4935-b60c-8553d2716e86): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:36:12 addons-301682 kubelet[1543]: E0630 14:36:12.506722    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="32226795-7a22-4935-b60c-8553d2716e86"
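Every pull failure in this kubelet log is Docker Hub's toomanyrequests limit on unauthenticated pulls, which is what keeps registry-694bd45846-x8cnn, nginx, and task-pv-pod in ImagePullBackOff. Two standard mitigations, sketched here only as illustration (this CI job does neither; <user> and <token> are placeholders):

	# authenticate pulls so the higher per-account limit applies
	kubectl --context addons-301682 -n default create secret docker-registry hub-creds \
	  --docker-username=<user> --docker-password=<token>

	# or side-load the image into the node so no registry pull happens
	minikube -p addons-301682 image load docker.io/nginx:alpine

A pod would then reference hub-creds under spec.imagePullSecrets, or, with the image pre-loaded, use imagePullPolicy: IfNotPresent.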
	
	
	==> storage-provisioner [f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4] <==
	W0630 14:35:49.357761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:51.361365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:51.366827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:53.370390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:53.376471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:55.380510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:55.385958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:57.388943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:57.394197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:59.397865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:35:59.403945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:01.407021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:01.415742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:03.418480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:03.423671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:05.428759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:05.434920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:07.437905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:07.443744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:09.446761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:09.454758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:11.457891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:11.462992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:13.467321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:36:13.473451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
helpers_test.go:261: (dbg) Run:  kubectl --context addons-301682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path registry-694bd45846-x8cnn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path registry-694bd45846-x8cnn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path registry-694bd45846-x8cnn: exit status 1 (82.749115ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:25:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9gdz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9gdz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/nginx to addons-301682
	  Warning  Failed     9m24s                 kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m30s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     118s (x5 over 9m24s)  kubelet            Error: ErrImagePull
	  Warning  Failed     118s (x4 over 8m15s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     27s (x16 over 9m23s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    14s (x17 over 9m23s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:30:11 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcnmb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-jcnmb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-301682
	  Warning  Failed     86s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    48s (x5 over 4m55s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     48s (x5 over 4m55s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    34s (x4 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2s (x3 over 4m56s)   kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2s (x4 over 4m56s)   kubelet            Error: ErrImagePull
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l844 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6l844:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-694bd45846-x8cnn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path registry-694bd45846-x8cnn: exit status 1
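Also visible in the describe output above: test-local-path has Node: <none> and no events, i.e. it was never scheduled. The scheduler will not place a pod whose PersistentVolumeClaim cannot bind, and test-pvc can never bind because its "local-path" StorageClass is missing (see the kube-controller-manager log earlier). A hypothetical way to read the scheduling condition directly:

	kubectl --context addons-301682 -n default get pod test-local-path -o jsonpath='{.status.conditions}'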
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable csi-hostpath-driver --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/CSI (376.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (345.84s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-301682 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-301682 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
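The ~150 identical poll lines that follow are helpers_test.go re-reading the claim's status.phase for the whole 5m0s window; the phase never leaves Pending, for the missing-StorageClass reason noted above. The same wait collapses to a single command (an equivalent sketch, not what the helper actually runs):

	kubectl --context addons-301682 -n default wait pvc/test-pvc --for=jsonpath='{.status.phase}'=Bound --timeout=5m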
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
[the poll command above was re-run and logged identically 154 more times during the 5m0s wait; duplicate lines elided]
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-301682 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.389µs)
helpers_test.go:396: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
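
The wall of identical Run lines collapsed above comes from the harness re-issuing the same kubectl query until its deadline expires. A minimal Go sketch of that poll-until-deadline pattern, assuming a 2s interval and shelling out via os/exec (the function name and constants are illustrative, not the actual helpers_test.go code):

// pvcwait.go: poll a PVC's phase via kubectl until it reports Bound or the
// context deadline passes. Illustrative sketch, not the real test helper.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForPVCBound(ctx context.Context, kubectx, name, ns string) error {
	tick := time.NewTicker(2 * time.Second) // assumed poll interval
	defer tick.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubectx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		select {
		case <-ctx.Done():
			// Mirrors the failure mode above: the deadline fires first.
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-301682", "test-pvc", "default"); err != nil {
		fmt.Println(err)
	}
}

Against this cluster the phase never reached Bound, so the context deadline fired first, matching the "context deadline exceeded" failure above.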
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-301682 -n addons-301682
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 logs -n 25: (1.533398406s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:17 UTC |                     |
	|         | -p download-only-777401              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | -o=json --download-only              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | -p download-only-781147              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | --download-only -p                   | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | binary-mirror-095233                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44619               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-095233              | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| addons  | disable dashboard -p                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| start   | -p addons-301682 --wait=true         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:25 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:29 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | configure registry-creds -f          | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | ./testdata/addons_testconfig.json    |                      |         |         |                     |                     |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | disable registry-creds               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:31 UTC | 30 Jun 25 14:31 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:18:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:18:18.914659 1558425 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:18:18.914940 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.914950 1558425 out.go:358] Setting ErrFile to fd 2...
	I0630 14:18:18.914954 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.915163 1558425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:18:18.915795 1558425 out.go:352] Setting JSON to false
	I0630 14:18:18.916730 1558425 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":28791,"bootTime":1751264308,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:18:18.916865 1558425 start.go:140] virtualization: kvm guest
	I0630 14:18:18.918804 1558425 out.go:177] * [addons-301682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:18:18.920591 1558425 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:18:18.920596 1558425 notify.go:220] Checking for updates...
	I0630 14:18:18.923430 1558425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:18:18.924993 1558425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:18:18.926449 1558425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:18.927916 1558425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:18:18.929158 1558425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:18:18.930609 1558425 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:18:18.965828 1558425 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:18:18.967229 1558425 start.go:304] selected driver: kvm2
	I0630 14:18:18.967249 1558425 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:18:18.967260 1558425 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:18:18.968055 1558425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.968161 1558425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:18:18.984884 1558425 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:18:18.984967 1558425 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:18:18.985269 1558425 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:18:18.985311 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:18.985360 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:18.985373 1558425 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:18:18.985492 1558425 start.go:347] cluster config:
	{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:18.985616 1558425 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.987784 1558425 out.go:177] * Starting "addons-301682" primary control-plane node in "addons-301682" cluster
	I0630 14:18:18.989175 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:18.989236 1558425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 14:18:18.989252 1558425 cache.go:56] Caching tarball of preloaded images
	I0630 14:18:18.989351 1558425 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 14:18:18.989366 1558425 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
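
The three preload lines above are a cache-hit fast path: the tarball of images for v1.33.2 on crio is already on disk, so the download is skipped. A minimal stand-in for that check, with the path assembled here purely for illustration (not minikube's actual cache code):

// preload.go: skip the download when the preloaded image tarball is already
// cached on disk. Path and messages are illustrative, not minikube's code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4")

	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("Found local preload:", tarball, "(skipping download)")
		return
	}
	fmt.Println("preload not cached; it would be downloaded here")
}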
	I0630 14:18:18.989808 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:18.989840 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json: {Name:mk0b97369f17da476cd2a8393ae45d3ce84c94a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:18.990016 1558425 start.go:360] acquireMachinesLock for addons-301682: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:18:18.990075 1558425 start.go:364] duration metric: took 40.808µs to acquireMachinesLock for "addons-301682"
	I0630 14:18:18.990091 1558425 start.go:93] Provisioning new machine with config: &{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:18:18.990156 1558425 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:18:18.992039 1558425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:18:18.992210 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:18:18.992268 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:18:19.009360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0630 14:18:19.009944 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:18:19.010513 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:18:19.010538 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:18:19.010965 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:18:19.011233 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:19.011437 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:19.011652 1558425 start.go:159] libmachine.API.Create for "addons-301682" (driver="kvm2")
	I0630 14:18:19.011686 1558425 client.go:168] LocalClient.Create starting
	I0630 14:18:19.011737 1558425 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 14:18:19.156936 1558425 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 14:18:19.413430 1558425 main.go:141] libmachine: Running pre-create checks...
	I0630 14:18:19.413459 1558425 main.go:141] libmachine: (addons-301682) Calling .PreCreateCheck
	I0630 14:18:19.414009 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:19.414492 1558425 main.go:141] libmachine: Creating machine...
	I0630 14:18:19.414509 1558425 main.go:141] libmachine: (addons-301682) Calling .Create
	I0630 14:18:19.414658 1558425 main.go:141] libmachine: (addons-301682) creating KVM machine...
	I0630 14:18:19.414680 1558425 main.go:141] libmachine: (addons-301682) creating network...
	I0630 14:18:19.416107 1558425 main.go:141] libmachine: (addons-301682) DBG | found existing default KVM network
	I0630 14:18:19.416967 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.416813 1558447 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0630 14:18:19.417027 1558425 main.go:141] libmachine: (addons-301682) DBG | created network xml: 
	I0630 14:18:19.417047 1558425 main.go:141] libmachine: (addons-301682) DBG | <network>
	I0630 14:18:19.417058 1558425 main.go:141] libmachine: (addons-301682) DBG |   <name>mk-addons-301682</name>
	I0630 14:18:19.417065 1558425 main.go:141] libmachine: (addons-301682) DBG |   <dns enable='no'/>
	I0630 14:18:19.417074 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417083 1558425 main.go:141] libmachine: (addons-301682) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:18:19.417095 1558425 main.go:141] libmachine: (addons-301682) DBG |     <dhcp>
	I0630 14:18:19.417105 1558425 main.go:141] libmachine: (addons-301682) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:18:19.417114 1558425 main.go:141] libmachine: (addons-301682) DBG |     </dhcp>
	I0630 14:18:19.417134 1558425 main.go:141] libmachine: (addons-301682) DBG |   </ip>
	I0630 14:18:19.417161 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417196 1558425 main.go:141] libmachine: (addons-301682) DBG | </network>
	I0630 14:18:19.417211 1558425 main.go:141] libmachine: (addons-301682) DBG | 
	I0630 14:18:19.422966 1558425 main.go:141] libmachine: (addons-301682) DBG | trying to create private KVM network mk-addons-301682 192.168.39.0/24...
	I0630 14:18:19.504039 1558425 main.go:141] libmachine: (addons-301682) DBG | private KVM network mk-addons-301682 192.168.39.0/24 created
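
The network XML dumped above is then defined and activated through libvirt. A minimal sketch of that step, assuming the libvirt.org/go/libvirt Go bindings (the kvm2 driver wraps the same C API; error handling here is reduced to panics):

// netdefine.go: define and activate the private network from the XML above.
// A sketch assuming the libvirt.org/go/libvirt bindings; not minikube's code.
package main

import (
	libvirt "libvirt.org/go/libvirt"
)

const netXML = `<network>
  <name>mk-addons-301682</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as the log
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(netXML) // persistent definition
	if err != nil {
		panic(err)
	}
	defer net.Free()

	if err := net.Create(); err != nil { // activate, enabling its DHCP range
		panic(err)
	}
}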
	I0630 14:18:19.504091 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.503994 1558447 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.504105 1558425 main.go:141] libmachine: (addons-301682) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.504121 1558425 main.go:141] libmachine: (addons-301682) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:18:19.504170 1558425 main.go:141] libmachine: (addons-301682) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:18:19.852642 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.852518 1558447 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa...
	I0630 14:18:19.994685 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994513 1558447 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk...
	I0630 14:18:19.994718 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing magic tar header
	I0630 14:18:19.994732 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing SSH key tar header
	I0630 14:18:19.994739 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994653 1558447 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.994842 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682
	I0630 14:18:19.994876 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 14:18:19.994890 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 (perms=drwx------)
	I0630 14:18:19.994904 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:18:19.994914 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 14:18:19.994928 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 14:18:19.994937 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:18:19.994950 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:18:19.994964 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.994974 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:19.994989 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 14:18:19.994999 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:18:19.995008 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins
	I0630 14:18:19.995017 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home
	I0630 14:18:19.995028 1558425 main.go:141] libmachine: (addons-301682) DBG | skipping /home - not owner
	I0630 14:18:19.996388 1558425 main.go:141] libmachine: (addons-301682) define libvirt domain using xml: 
	I0630 14:18:19.996417 1558425 main.go:141] libmachine: (addons-301682) <domain type='kvm'>
	I0630 14:18:19.996424 1558425 main.go:141] libmachine: (addons-301682)   <name>addons-301682</name>
	I0630 14:18:19.996429 1558425 main.go:141] libmachine: (addons-301682)   <memory unit='MiB'>4096</memory>
	I0630 14:18:19.996434 1558425 main.go:141] libmachine: (addons-301682)   <vcpu>2</vcpu>
	I0630 14:18:19.996437 1558425 main.go:141] libmachine: (addons-301682)   <features>
	I0630 14:18:19.996441 1558425 main.go:141] libmachine: (addons-301682)     <acpi/>
	I0630 14:18:19.996445 1558425 main.go:141] libmachine: (addons-301682)     <apic/>
	I0630 14:18:19.996450 1558425 main.go:141] libmachine: (addons-301682)     <pae/>
	I0630 14:18:19.996454 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996496 1558425 main.go:141] libmachine: (addons-301682)   </features>
	I0630 14:18:19.996523 1558425 main.go:141] libmachine: (addons-301682)   <cpu mode='host-passthrough'>
	I0630 14:18:19.996559 1558425 main.go:141] libmachine: (addons-301682)   
	I0630 14:18:19.996579 1558425 main.go:141] libmachine: (addons-301682)   </cpu>
	I0630 14:18:19.996596 1558425 main.go:141] libmachine: (addons-301682)   <os>
	I0630 14:18:19.996607 1558425 main.go:141] libmachine: (addons-301682)     <type>hvm</type>
	I0630 14:18:19.996615 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='cdrom'/>
	I0630 14:18:19.996623 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='hd'/>
	I0630 14:18:19.996628 1558425 main.go:141] libmachine: (addons-301682)     <bootmenu enable='no'/>
	I0630 14:18:19.996634 1558425 main.go:141] libmachine: (addons-301682)   </os>
	I0630 14:18:19.996639 1558425 main.go:141] libmachine: (addons-301682)   <devices>
	I0630 14:18:19.996646 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='cdrom'>
	I0630 14:18:19.996654 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/boot2docker.iso'/>
	I0630 14:18:19.996661 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hdc' bus='scsi'/>
	I0630 14:18:19.996666 1558425 main.go:141] libmachine: (addons-301682)       <readonly/>
	I0630 14:18:19.996672 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996677 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='disk'>
	I0630 14:18:19.996687 1558425 main.go:141] libmachine: (addons-301682)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:18:19.996710 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk'/>
	I0630 14:18:19.996729 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hda' bus='virtio'/>
	I0630 14:18:19.996742 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996753 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996766 1558425 main.go:141] libmachine: (addons-301682)       <source network='mk-addons-301682'/>
	I0630 14:18:19.996777 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996786 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996796 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996808 1558425 main.go:141] libmachine: (addons-301682)       <source network='default'/>
	I0630 14:18:19.996821 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996847 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996868 1558425 main.go:141] libmachine: (addons-301682)     <serial type='pty'>
	I0630 14:18:19.996884 1558425 main.go:141] libmachine: (addons-301682)       <target port='0'/>
	I0630 14:18:19.996899 1558425 main.go:141] libmachine: (addons-301682)     </serial>
	I0630 14:18:19.996909 1558425 main.go:141] libmachine: (addons-301682)     <console type='pty'>
	I0630 14:18:19.996918 1558425 main.go:141] libmachine: (addons-301682)       <target type='serial' port='0'/>
	I0630 14:18:19.996928 1558425 main.go:141] libmachine: (addons-301682)     </console>
	I0630 14:18:19.996938 1558425 main.go:141] libmachine: (addons-301682)     <rng model='virtio'>
	I0630 14:18:19.996951 1558425 main.go:141] libmachine: (addons-301682)       <backend model='random'>/dev/random</backend>
	I0630 14:18:19.996962 1558425 main.go:141] libmachine: (addons-301682)     </rng>
	I0630 14:18:19.996969 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996980 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996990 1558425 main.go:141] libmachine: (addons-301682)   </devices>
	I0630 14:18:19.997056 1558425 main.go:141] libmachine: (addons-301682) </domain>
	I0630 14:18:19.997083 1558425 main.go:141] libmachine: (addons-301682) 
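
The driver hands the <domain> XML above to libvirt and then boots it ("creating domain..."). Below is a minimal sketch of that define-then-start sequence, assuming the libvirt.org/go/libvirt bindings and a local domain.xml file holding the document above; this is an illustration, not minikube's actual driver code.

    // Sketch: define a persistent libvirt domain from XML, then start it.
    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("domain.xml") // the <domain>...</domain> document above
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in this run
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it ("starting domain...")
            log.Fatal(err)
        }
    }

DomainDefineXML only records the definition; Create() is what makes the guest start booting, after which the driver begins polling for an IP as in the lines that follow.
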
	I0630 14:18:20.002436 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:4a:da:84 in network default
	I0630 14:18:20.002966 1558425 main.go:141] libmachine: (addons-301682) starting domain...
	I0630 14:18:20.002981 1558425 main.go:141] libmachine: (addons-301682) ensuring networks are active...
	I0630 14:18:20.002988 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:20.003928 1558425 main.go:141] libmachine: (addons-301682) Ensuring network default is active
	I0630 14:18:20.004377 1558425 main.go:141] libmachine: (addons-301682) Ensuring network mk-addons-301682 is active
	I0630 14:18:20.004924 1558425 main.go:141] libmachine: (addons-301682) getting domain XML...
	I0630 14:18:20.006331 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:21.490289 1558425 main.go:141] libmachine: (addons-301682) waiting for IP...
	I0630 14:18:21.491154 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.491628 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.491677 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.491627 1558447 retry.go:31] will retry after 227.981696ms: waiting for domain to come up
	I0630 14:18:21.721263 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.721780 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.721803 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.721737 1558447 retry.go:31] will retry after 379.046975ms: waiting for domain to come up
	I0630 14:18:22.102468 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.102921 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.102946 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.102870 1558447 retry.go:31] will retry after 342.349164ms: waiting for domain to come up
	I0630 14:18:22.446573 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.446984 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.447028 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.446972 1558447 retry.go:31] will retry after 471.24813ms: waiting for domain to come up
	I0630 14:18:22.920211 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.920789 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.920882 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.920792 1558447 retry.go:31] will retry after 708.674729ms: waiting for domain to come up
	I0630 14:18:23.631552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:23.632140 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:23.632158 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:23.632083 1558447 retry.go:31] will retry after 832.667186ms: waiting for domain to come up
	I0630 14:18:24.466597 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:24.467128 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:24.467188 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:24.467084 1558447 retry.go:31] will retry after 1.046318752s: waiting for domain to come up
	I0630 14:18:25.514952 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:25.515439 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:25.515467 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:25.515417 1558447 retry.go:31] will retry after 1.194063503s: waiting for domain to come up
	I0630 14:18:26.712109 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:26.712668 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:26.712736 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:26.712627 1558447 retry.go:31] will retry after 1.248422127s: waiting for domain to come up
	I0630 14:18:27.962423 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:27.962871 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:27.962904 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:27.962823 1558447 retry.go:31] will retry after 2.035519816s: waiting for domain to come up
	I0630 14:18:29.999626 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:30.000023 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:30.000122 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:30.000029 1558447 retry.go:31] will retry after 2.163487066s: waiting for domain to come up
	I0630 14:18:32.164834 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:32.165260 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:32.165289 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:32.165193 1558447 retry.go:31] will retry after 2.715279658s: waiting for domain to come up
	I0630 14:18:34.882095 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:34.882613 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:34.882651 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:34.882566 1558447 retry.go:31] will retry after 4.101409574s: waiting for domain to come up
	I0630 14:18:38.986670 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:38.987057 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:38.987115 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:38.987021 1558447 retry.go:31] will retry after 4.770477957s: waiting for domain to come up
	I0630 14:18:43.763775 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764289 1558425 main.go:141] libmachine: (addons-301682) found domain IP: 192.168.39.227
	I0630 14:18:43.764317 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has current primary IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
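
The wait-for-IP loop above retries with a growing delay (from ~228ms up to ~4.8s) until the DHCP lease for the domain's MAC appears. A plain capped-backoff version of that loop as a sketch; the real retry.go helper may differ in backoff shape and jitter.

    // Sketch: poll for the domain IP with a capped, roughly doubling backoff.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            time.Sleep(wait)
            if wait *= 2; wait > 5*time.Second { // cap the backoff
                wait = 5 * time.Second
            }
        }
        return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
        tries := 0
        ip, err := waitForIP(func() (string, bool) {
            tries++                              // stand-in for a DHCP lease query
            return "192.168.39.227", tries > 3
        }, time.Minute)
        fmt.Println(ip, err)
    }
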
	I0630 14:18:43.764323 1558425 main.go:141] libmachine: (addons-301682) reserving static IP address...
	I0630 14:18:43.764708 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find host DHCP lease matching {name: "addons-301682", mac: "52:54:00:83:16:36", ip: "192.168.39.227"} in network mk-addons-301682
	I0630 14:18:43.852639 1558425 main.go:141] libmachine: (addons-301682) reserved static IP address 192.168.39.227 for domain addons-301682
	I0630 14:18:43.852672 1558425 main.go:141] libmachine: (addons-301682) DBG | Getting to WaitForSSH function...
	I0630 14:18:43.852679 1558425 main.go:141] libmachine: (addons-301682) waiting for SSH...
	I0630 14:18:43.855466 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855863 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.855913 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855970 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH client type: external
	I0630 14:18:43.856034 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa (-rw-------)
	I0630 14:18:43.856089 1558425 main.go:141] libmachine: (addons-301682) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:18:43.856119 1558425 main.go:141] libmachine: (addons-301682) DBG | About to run SSH command:
	I0630 14:18:43.856137 1558425 main.go:141] libmachine: (addons-301682) DBG | exit 0
	I0630 14:18:43.981627 1558425 main.go:141] libmachine: (addons-301682) DBG | SSH cmd err, output: <nil>: 
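
The WaitForSSH step shells out to the system ssh binary with the flags logged above and treats a clean `exit 0` as proof that sshd is up and the key works. A hedged sketch of that probe; the flag set is a representative subset of the logged one, and the key path is this run's (substitute your own).

    // Sketch: probe a fresh VM over external SSH by running `exit 0`.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa",
            "-p", "22",
            "docker@192.168.39.227",
            "exit 0", // success means the guest is reachable and the key is accepted
        }
        if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
            log.Fatalf("ssh probe failed: %v: %s", err, out)
        }
        log.Println("SSH is available")
    }
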
	I0630 14:18:43.981928 1558425 main.go:141] libmachine: (addons-301682) KVM machine creation complete
	I0630 14:18:43.982338 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:43.982966 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983226 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983462 1558425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:18:43.983477 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:18:43.984862 1558425 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:18:43.984878 1558425 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:18:43.984885 1558425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:18:43.984892 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:43.987532 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.987932 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.987959 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.988068 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:43.988288 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988434 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988572 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:43.988711 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:43.988940 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:43.988950 1558425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:18:44.093060 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.093094 1558425 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:18:44.093103 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.096339 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096697 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.096721 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096934 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.097182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097449 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097610 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.097843 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.098060 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.098080 1558425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:18:44.202824 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:18:44.202946 1558425 main.go:141] libmachine: found compatible host: buildroot
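
Provisioner detection is just `cat /etc/os-release` over SSH plus a lookup on the ID field (ID=buildroot here). A minimal parser for that step, as an illustration; real os-release values may be quoted, which this handles only trivially.

    // Sketch: pull the ID= field out of /etc/os-release contents.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func osReleaseID(contents string) string {
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        sample := "NAME=Buildroot\nID=buildroot\nVERSION_ID=2025.02\n"
        fmt.Println(osReleaseID(sample)) // buildroot
    }
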
	I0630 14:18:44.202959 1558425 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:18:44.202967 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203257 1558425 buildroot.go:166] provisioning hostname "addons-301682"
	I0630 14:18:44.203283 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.206655 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.206965 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.206989 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.207261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.207476 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207654 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207765 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.207928 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.208172 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.208189 1558425 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-301682 && echo "addons-301682" | sudo tee /etc/hostname
	I0630 14:18:44.326076 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-301682
	
	I0630 14:18:44.326120 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.329781 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330236 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.330271 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330493 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.330780 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331000 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331147 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.331319 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.331561 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.331583 1558425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-301682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-301682/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-301682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:18:44.442815 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.442853 1558425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 14:18:44.442872 1558425 buildroot.go:174] setting up certificates
	I0630 14:18:44.442886 1558425 provision.go:84] configureAuth start
	I0630 14:18:44.442963 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.443427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:44.446591 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447120 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.447146 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447411 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.449967 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450292 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.450314 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450474 1558425 provision.go:143] copyHostCerts
	I0630 14:18:44.450577 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 14:18:44.450730 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 14:18:44.450832 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 14:18:44.450922 1558425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.addons-301682 san=[127.0.0.1 192.168.39.227 addons-301682 localhost minikube]
	I0630 14:18:44.669777 1558425 provision.go:177] copyRemoteCerts
	I0630 14:18:44.669866 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:18:44.669906 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.673124 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673495 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.673530 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673760 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.674080 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.674291 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.674517 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:44.758379 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:18:44.788885 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:18:44.817666 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:18:44.847039 1558425 provision.go:87] duration metric: took 404.122435ms to configureAuth
	I0630 14:18:44.847076 1558425 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:18:44.847582 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:18:44.847720 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.850359 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.850971 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.850998 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.851240 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.851500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851706 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851871 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.852084 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.852306 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.852322 1558425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 14:18:45.094141 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 14:18:45.094172 1558425 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:18:45.094182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetURL
	I0630 14:18:45.095525 1558425 main.go:141] libmachine: (addons-301682) DBG | using libvirt version 6000000
	I0630 14:18:45.097995 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098457 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.098484 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098973 1558425 main.go:141] libmachine: Docker is up and running!
	I0630 14:18:45.098988 1558425 main.go:141] libmachine: Reticulating splines...
	I0630 14:18:45.098996 1558425 client.go:171] duration metric: took 26.087298039s to LocalClient.Create
	I0630 14:18:45.099029 1558425 start.go:167] duration metric: took 26.087375233s to libmachine.API.Create "addons-301682"
	I0630 14:18:45.099043 1558425 start.go:293] postStartSetup for "addons-301682" (driver="kvm2")
	I0630 14:18:45.099058 1558425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:18:45.099080 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.099385 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:18:45.099417 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.103070 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103476 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.103519 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.103974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.104154 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.104348 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.190062 1558425 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:18:45.194479 1558425 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:18:45.194513 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 14:18:45.194584 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 14:18:45.194617 1558425 start.go:296] duration metric: took 95.564885ms for postStartSetup
	I0630 14:18:45.194655 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:45.195269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.198414 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.198916 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.198937 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.199225 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:45.199414 1558425 start.go:128] duration metric: took 26.209245344s to createHost
	I0630 14:18:45.199439 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.202677 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203657 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.203683 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203917 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.204167 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204389 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204594 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.204750 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:45.204952 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:45.204962 1558425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:18:45.310482 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751293125.283428942
	
	I0630 14:18:45.310513 1558425 fix.go:216] guest clock: 1751293125.283428942
	I0630 14:18:45.310540 1558425 fix.go:229] Guest: 2025-06-30 14:18:45.283428942 +0000 UTC Remote: 2025-06-30 14:18:45.199427216 +0000 UTC m=+26.326566099 (delta=84.001726ms)
	I0630 14:18:45.310570 1558425 fix.go:200] guest clock delta is within tolerance: 84.001726ms
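
The guest-clock check parses the `date +%s.%N` output and compares it against the host clock; the 84ms delta here is accepted. A sketch of that comparison; the 2s tolerance below is an assumed illustration value, not taken from the log.

    // Sketch: parse the guest's `date +%s.%N` output and compare to the host clock.
    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        guestOut := "1751293125.283428942" // from `date +%s.%N` over SSH
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %s (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
    }
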
	I0630 14:18:45.310578 1558425 start.go:83] releasing machines lock for "addons-301682", held for 26.320495243s
	I0630 14:18:45.310656 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.310928 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.313785 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314207 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.314241 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314506 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315123 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315340 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315461 1558425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:18:45.315505 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.315646 1558425 ssh_runner.go:195] Run: cat /version.json
	I0630 14:18:45.315683 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.318925 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319155 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319563 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319594 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319617 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319643 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319788 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.319877 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.320031 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320110 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320304 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320317 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320442 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.320501 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.399981 1558425 ssh_runner.go:195] Run: systemctl --version
	I0630 14:18:45.435607 1558425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 14:18:45.595593 1558425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:18:45.602291 1558425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:18:45.602374 1558425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:18:45.622229 1558425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:18:45.622263 1558425 start.go:495] detecting cgroup driver to use...
	I0630 14:18:45.622333 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 14:18:45.641226 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 14:18:45.658995 1558425 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:18:45.659074 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:18:45.675308 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:18:45.691780 1558425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:18:45.844773 1558425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:18:46.002067 1558425 docker.go:246] disabling docker service ...
	I0630 14:18:46.002163 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:18:46.018486 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:18:46.032711 1558425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:18:46.215507 1558425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:18:46.345437 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:18:46.361241 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:18:46.382182 1558425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 14:18:46.382265 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.393781 1558425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 14:18:46.393858 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.404879 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.415753 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.427101 1558425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:18:46.439585 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.450640 1558425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.469657 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.480995 1558425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:18:46.490960 1558425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:18:46.491038 1558425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:18:46.506162 1558425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 14:18:46.516885 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:46.649290 1558425 ssh_runner.go:195] Run: sudo systemctl restart crio
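
The cri-o configuration above is a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and a crio restart. The same two key rewrites (pause image and cgroup manager) done in Go, as a sketch; the regexes mirror the logged sed expressions and the values are the ones from this run.

    // Sketch: rewrite pause_image and cgroup_manager lines in a crio drop-in.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
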
	I0630 14:18:46.754804 1558425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 14:18:46.754924 1558425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 14:18:46.760277 1558425 start.go:563] Will wait 60s for crictl version
	I0630 14:18:46.760374 1558425 ssh_runner.go:195] Run: which crictl
	I0630 14:18:46.764622 1558425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:18:46.806540 1558425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 14:18:46.806668 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.835571 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.870294 1558425 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 14:18:46.871793 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:46.874897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875281 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:46.875316 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875568 1558425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:18:46.880040 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:46.893844 1558425 kubeadm.go:875] updating cluster {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0630 14:18:46.894040 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:46.894098 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:46.928051 1558425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:18:46.928142 1558425 ssh_runner.go:195] Run: which lz4
	I0630 14:18:46.932106 1558425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:18:46.936459 1558425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:18:46.936498 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 14:18:48.250677 1558425 crio.go:462] duration metric: took 1.318609473s to copy over tarball
	I0630 14:18:48.250794 1558425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:18:50.229636 1558425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978807649s)
	I0630 14:18:50.229688 1558425 crio.go:469] duration metric: took 1.978978941s to extract the tarball
	I0630 14:18:50.229696 1558425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 14:18:50.268804 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:50.313787 1558425 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 14:18:50.313824 1558425 cache_images.go:84] Images are preloaded, skipping loading
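
Whether the preload tarball is needed is decided by parsing `sudo crictl images --output json` and looking for the expected kube image (registry.k8s.io/kube-apiserver:v1.33.2 in this run). A sketch of that check; the JSON shape assumed here (an "images" array with "repoTags") is my reading of crictl's output format and should be verified against your crictl version.

    // Sketch: decide whether images are preloaded by scanning crictl's JSON output.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(raw []byte, want string) (bool, error) {
        var imgs crictlImages
        if err := json.Unmarshal(raw, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.33.2"]}]}`)
        ok, _ := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.33.2")
        fmt.Println(ok) // true -> skip copying the preload tarball
    }
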
	I0630 14:18:50.313836 1558425 kubeadm.go:926] updating node { 192.168.39.227 8443 v1.33.2 crio true true} ...
	I0630 14:18:50.313984 1558425 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-301682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 14:18:50.314108 1558425 ssh_runner.go:195] Run: crio config
	I0630 14:18:50.358762 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:50.358788 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:50.358799 1558425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:18:50.358821 1558425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-301682 NodeName:addons-301682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:18:50.358985 1558425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-301682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
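
The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a stream before it is shipped to /var/tmp/minikube/kubeadm.yaml.new, assuming gopkg.in/yaml.v3 and using two abbreviated sample documents in place of the full config.

    // Sketch: walk a multi-document YAML stream and print each document's type.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "strings"

        "gopkg.in/yaml.v3"
    )

    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        const config = "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n" +
            "---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
        dec := yaml.NewDecoder(strings.NewReader(config))
        for {
            var tm typeMeta
            if err := dec.Decode(&tm); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s %s\n", tm.APIVersion, tm.Kind)
        }
    }
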
	I0630 14:18:50.359075 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:18:50.370269 1558425 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:18:50.370359 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:18:50.381422 1558425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0630 14:18:50.402864 1558425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:18:50.423535 1558425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0630 14:18:50.443802 1558425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0630 14:18:50.448073 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
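
Both host-entry updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same filter-then-append pattern: drop any stale line for the name, append the fresh mapping, and write through a temp file so the replacement lands in one copy. The same idea in Go, as a sketch.

    // Sketch: idempotently upsert an "IP<TAB>hostname" line in a hosts file.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // single-step replacement
    }

    func main() {
        if err := upsertHost("hosts", "192.168.39.227", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
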
	I0630 14:18:50.462771 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:50.610565 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:18:50.641674 1558425 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682 for IP: 192.168.39.227
	I0630 14:18:50.641703 1558425 certs.go:194] generating shared ca certs ...
	I0630 14:18:50.641726 1558425 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.641917 1558425 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 14:18:50.775973 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt ...
	I0630 14:18:50.776127 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt: {Name:mk4a7e2f23df1877aa667a5fe9d149d87fa65b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776340 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key ...
	I0630 14:18:50.776353 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key: {Name:mkfe815a12ae8eded146419f42722ed747bb8cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776428 1558425 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 14:18:51.239699 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt ...
	I0630 14:18:51.239736 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt: {Name:mk010f91985630538e2436d654ff5b4cc759ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.239913 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key ...
	I0630 14:18:51.239969 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key: {Name:mk7a36f8a28748533897dd07634d8a5fe44a63a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.240059 1558425 certs.go:256] generating profile certs ...
	I0630 14:18:51.240131 1558425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key
	I0630 14:18:51.240150 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt with IP's: []
	I0630 14:18:51.635887 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt ...
	I0630 14:18:51.635927 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: {Name:mk22a67b2c0e90bc5dc67c34e330ee73fa799ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636119 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key ...
	I0630 14:18:51.636131 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key: {Name:mkbf3398b6d7cd5371d9a47d76e04eca4caef4d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636203 1558425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213
	I0630 14:18:51.636222 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0630 14:18:52.292769 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 ...
	I0630 14:18:52.292809 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213: {Name:mk1402d3ac26fc5001a4011347c3552a378bda20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.292987 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 ...
	I0630 14:18:52.293001 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213: {Name:mkeaa6e21db5ae6cfb6b65c2ca90535340da5144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.293104 1558425 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt
	I0630 14:18:52.293196 1558425 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key
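The .294cb213 suffix on the apiserver cert and key appears to be a short hash keyed to the requested cert parameters (notably the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227] above), so a changed IP set produces a new file instead of reusing a stale cert; the suffixed files are then copied to the canonical apiserver.crt/apiserver.key names. The resulting SANs can be inspected on the node with openssl:

	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'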
	I0630 14:18:52.293250 1558425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key
	I0630 14:18:52.293270 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt with IP's: []
	I0630 14:18:52.419123 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt ...
	I0630 14:18:52.419160 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt: {Name:mk3dd33047a5c3911a43a99bfac807aefa8e06f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419432 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key ...
	I0630 14:18:52.419460 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key: {Name:mk0d0b95d0dc825fc1e604461553530ed22a222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419680 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:18:52.419719 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:18:52.419744 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:18:52.419768 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 14:18:52.420585 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:18:52.463313 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:18:52.499004 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:18:52.526030 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 14:18:52.553220 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:18:52.581783 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:18:52.609656 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:18:52.639333 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 14:18:52.668789 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:18:52.696673 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:18:52.718151 1558425 ssh_runner.go:195] Run: openssl version
	I0630 14:18:52.724602 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:18:52.737181 1558425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742169 1558425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742231 1558425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.749342 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
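The b5213941.0 symlink follows OpenSSL's hashed-directory convention for /etc/ssl/certs: a CA is looked up by the hash of its subject name, with a .0 suffix for the first certificate sharing that hash. The hash in the link name is exactly what the openssl x509 -hash call above printed:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941; minikube links /etc/ssl/certs/b5213941.0 to this CA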
	I0630 14:18:52.762744 1558425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:18:52.768406 1558425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:18:52.768474 1558425 kubeadm.go:392] StartCluster: {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:52.768572 1558425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 14:18:52.768641 1558425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:18:52.812315 1558425 cri.go:89] found id: ""
	I0630 14:18:52.812437 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:18:52.824357 1558425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:18:52.837485 1558425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:18:52.850688 1558425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:18:52.850718 1558425 kubeadm.go:157] found existing configuration files:
	
	I0630 14:18:52.850770 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:18:52.862272 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:18:52.862353 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:18:52.874603 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:18:52.885384 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:18:52.885470 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:18:52.897341 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.908726 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:18:52.908791 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.920093 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:18:52.930423 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:18:52.930535 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:18:52.943582 1558425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:18:53.101493 1558425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:19:04.329808 1558425 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:19:04.329898 1558425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:19:04.330028 1558425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:19:04.330246 1558425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:19:04.330383 1558425 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:19:04.330478 1558425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:19:04.332630 1558425 out.go:235]   - Generating certificates and keys ...
	I0630 14:19:04.332731 1558425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:19:04.332810 1558425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:19:04.332905 1558425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:19:04.332972 1558425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:19:04.333024 1558425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:19:04.333069 1558425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:19:04.333119 1558425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:19:04.333250 1558425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333332 1558425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:19:04.333509 1558425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333623 1558425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:19:04.333739 1558425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:19:04.333816 1558425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:19:04.333868 1558425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:19:04.333909 1558425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:19:04.333955 1558425 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:19:04.334001 1558425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:19:04.334088 1558425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:19:04.334155 1558425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:19:04.334337 1558425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:19:04.334433 1558425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:19:04.336040 1558425 out.go:235]   - Booting up control plane ...
	I0630 14:19:04.336158 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:19:04.336225 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:19:04.336291 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:19:04.336387 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:19:04.336461 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:19:04.336498 1558425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:19:04.336705 1558425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:19:04.336826 1558425 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:19:04.336898 1558425 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501258s
	I0630 14:19:04.336999 1558425 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:19:04.337079 1558425 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.227:8443/livez
	I0630 14:19:04.337160 1558425 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:19:04.337266 1558425 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:19:04.337343 1558425 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.200262885s
	I0630 14:19:04.337437 1558425 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.075387862s
	I0630 14:19:04.337541 1558425 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001441935s
	I0630 14:19:04.337665 1558425 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:19:04.337791 1558425 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:19:04.337843 1558425 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:19:04.338003 1558425 kubeadm.go:310] [mark-control-plane] Marking the node addons-301682 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:19:04.338066 1558425 kubeadm.go:310] [bootstrap-token] Using token: anrlv2.kitz2ouxhot5qn5d
	I0630 14:19:04.339966 1558425 out.go:235]   - Configuring RBAC rules ...
	I0630 14:19:04.340101 1558425 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:19:04.340226 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:19:04.340408 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:19:04.340552 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:19:04.340686 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:19:04.340806 1558425 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:19:04.340905 1558425 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:19:04.340944 1558425 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:19:04.340984 1558425 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:19:04.340990 1558425 kubeadm.go:310] 
	I0630 14:19:04.341040 1558425 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:19:04.341045 1558425 kubeadm.go:310] 
	I0630 14:19:04.341135 1558425 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:19:04.341142 1558425 kubeadm.go:310] 
	I0630 14:19:04.341172 1558425 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:19:04.341223 1558425 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:19:04.341270 1558425 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:19:04.341276 1558425 kubeadm.go:310] 
	I0630 14:19:04.341322 1558425 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:19:04.341328 1558425 kubeadm.go:310] 
	I0630 14:19:04.341449 1558425 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:19:04.341467 1558425 kubeadm.go:310] 
	I0630 14:19:04.341541 1558425 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:19:04.341643 1558425 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:19:04.341707 1558425 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:19:04.341712 1558425 kubeadm.go:310] 
	I0630 14:19:04.341781 1558425 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:19:04.341846 1558425 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:19:04.341851 1558425 kubeadm.go:310] 
	I0630 14:19:04.341924 1558425 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342019 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 14:19:04.342038 1558425 kubeadm.go:310] 	--control-plane 
	I0630 14:19:04.342043 1558425 kubeadm.go:310] 
	I0630 14:19:04.342140 1558425 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:19:04.342157 1558425 kubeadm.go:310] 
	I0630 14:19:04.342225 1558425 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342331 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
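The --discovery-token-ca-cert-hash in both join commands is the SHA-256 of the cluster CA's public key in DER encoding. Should it ever need to be recomputed on the control plane (for example after minting a new token), the standard recipe from the kubeadm documentation applies, using the cert path from this run:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'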
	I0630 14:19:04.342344 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:19:04.342353 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:19:04.344305 1558425 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:19:04.345962 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:19:04.358944 1558425 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
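The 496-byte 1-k8s.conflist written here is the bridge CNI chain that the "recommending bridge" decision above selected. A representative sketch of such a conflist (field values assumed from the standard bridge and portmap plugins plus the podSubnet configured earlier, not copied from this run):

	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge",
	      "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}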
	I0630 14:19:04.382550 1558425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:19:04.382682 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:04.382684 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-301682 minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-301682 minikube.k8s.io/primary=true
	I0630 14:19:04.443025 1558425 ops.go:34] apiserver oom_adj: -16
	I0630 14:19:04.557859 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.058710 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.558655 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.058095 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.558920 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.058903 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.558782 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.058045 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.134095 1558425 kubeadm.go:1105] duration metric: took 3.751500145s to wait for elevateKubeSystemPrivileges
	I0630 14:19:08.134146 1558425 kubeadm.go:394] duration metric: took 15.365674649s to StartCluster
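The burst of kubectl get sa default calls just above is minikube polling, at roughly half-second intervals, until the default ServiceAccount exists; that is the signal that the controller-manager's service-account machinery is live, which is what the "wait for elevateKubeSystemPrivileges" duration measures. A minimal shell equivalent of that wait, as a sketch:

	until sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done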
	I0630 14:19:08.134169 1558425 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.134310 1558425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:19:08.134819 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.135078 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:19:08.135086 1558425 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:19:08.135172 1558425 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0630 14:19:08.135355 1558425 addons.go:69] Setting yakd=true in profile "addons-301682"
	I0630 14:19:08.135370 1558425 addons.go:69] Setting default-storageclass=true in profile "addons-301682"
	I0630 14:19:08.135401 1558425 addons.go:69] Setting ingress=true in profile "addons-301682"
	I0630 14:19:08.135408 1558425 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-301682"
	I0630 14:19:08.135419 1558425 addons.go:69] Setting ingress-dns=true in profile "addons-301682"
	I0630 14:19:08.135425 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-301682"
	I0630 14:19:08.135433 1558425 addons.go:238] Setting addon ingress-dns=true in "addons-301682"
	I0630 14:19:08.135450 1558425 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135439 1558425 addons.go:69] Setting cloud-spanner=true in profile "addons-301682"
	I0630 14:19:08.135466 1558425 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-301682"
	I0630 14:19:08.135453 1558425 addons.go:69] Setting registry-creds=true in profile "addons-301682"
	I0630 14:19:08.135470 1558425 addons.go:238] Setting addon cloud-spanner=true in "addons-301682"
	I0630 14:19:08.135482 1558425 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-301682"
	I0630 14:19:08.135488 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135499 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-301682"
	I0630 14:19:08.135507 1558425 addons.go:238] Setting addon registry-creds=true in "addons-301682"
	I0630 14:19:08.135508 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135522 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135532 1558425 addons.go:69] Setting volcano=true in profile "addons-301682"
	I0630 14:19:08.135553 1558425 addons.go:238] Setting addon volcano=true in "addons-301682"
	I0630 14:19:08.135560 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135601 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135968 1558425 addons.go:69] Setting storage-provisioner=true in profile "addons-301682"
	I0630 14:19:08.135968 1558425 addons.go:69] Setting volumesnapshots=true in profile "addons-301682"
	I0630 14:19:08.135383 1558425 addons.go:238] Setting addon yakd=true in "addons-301682"
	I0630 14:19:08.135985 1558425 addons.go:238] Setting addon storage-provisioner=true in "addons-301682"
	I0630 14:19:08.135986 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135992 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135999 1558425 addons.go:69] Setting metrics-server=true in profile "addons-301682"
	I0630 14:19:08.136001 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135468 1558425 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:08.136013 1558425 addons.go:238] Setting addon metrics-server=true in "addons-301682"
	I0630 14:19:08.136018 1558425 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136026 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136033 1558425 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-301682"
	I0630 14:19:08.136033 1558425 addons.go:69] Setting registry=true in profile "addons-301682"
	I0630 14:19:08.136037 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136042 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136046 1558425 addons.go:238] Setting addon registry=true in "addons-301682"
	I0630 14:19:08.136053 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136053 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136063 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136333 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136344 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135988 1558425 addons.go:238] Setting addon volumesnapshots=true in "addons-301682"
	I0630 14:19:08.136373 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136380 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135392 1558425 addons.go:69] Setting gcp-auth=true in profile "addons-301682"
	I0630 14:19:08.136406 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135413 1558425 addons.go:238] Setting addon ingress=true in "addons-301682"
	I0630 14:19:08.136410 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136430 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136437 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136439 1558425 mustload.go:65] Loading cluster: addons-301682
	I0630 14:19:08.135985 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136376 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136021 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136019 1558425 addons.go:69] Setting inspektor-gadget=true in profile "addons-301682"
	I0630 14:19:08.136533 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136408 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136571 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136399 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136594 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136043 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136654 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136035 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135386 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136802 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136830 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136538 1558425 addons.go:238] Setting addon inspektor-gadget=true in "addons-301682"
	I0630 14:19:08.136860 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136968 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.137006 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.141678 1558425 out.go:177] * Verifying Kubernetes components...
	I0630 14:19:08.143558 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:19:08.149915 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.149982 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.150069 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.150111 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.153357 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.153432 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.165614 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0630 14:19:08.165858 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0630 14:19:08.166745 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.166906 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.167573 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167595 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.167730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167744 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.168231 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168297 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.168851 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.168901 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.173235 1558425 addons.go:238] Setting addon default-storageclass=true in "addons-301682"
	I0630 14:19:08.173294 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.173724 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.173785 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.184456 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0630 14:19:08.185663 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.186359 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.186383 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.186868 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.187481 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.187524 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.198676 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0630 14:19:08.199720 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0630 14:19:08.200624 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.201056 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0630 14:19:08.201384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.201425 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.201824 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.202320 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.202341 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.202767 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.203373 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.203425 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.203875 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.204017 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.204559 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.204608 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.204944 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.204958 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.205500 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.206106 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.206167 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.212484 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0630 14:19:08.213076 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.213762 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.213782 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.214717 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0630 14:19:08.214882 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0630 14:19:08.215450 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.215549 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.216208 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216234 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216395 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216419 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216498 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.216551 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0630 14:19:08.217141 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.217198 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.218026 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218644 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218679 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:19:08.218965 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219098 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0630 14:19:08.219374 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.219416 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.219490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.219517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.219600 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219645 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.220038 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220058 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.220197 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220208 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.222722 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0630 14:19:08.222897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0630 14:19:08.223028 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.223845 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.223892 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.224072 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0630 14:19:08.224388 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0630 14:19:08.224623 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.225142 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.225164 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.225248 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0630 14:19:08.225593 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226043 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226641 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.226692 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.227826 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.228314 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.228351 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.228730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.228753 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.228834 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.228874 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0630 14:19:08.229220 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.229470 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.229681 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.229725 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.230097 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.230128 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.240167 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.240974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.241058 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0630 14:19:08.243477 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.243596 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0630 14:19:08.261647 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0630 14:19:08.261668 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0630 14:19:08.261862 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0630 14:19:08.262201 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0630 14:19:08.261652 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0630 14:19:08.261852 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0630 14:19:08.262971 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.263041 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263580 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263640 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263642 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263689 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263697 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263766 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263767 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264204 1558425 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:19:08.264700 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264710 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264910 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.264924 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265056 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265067 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265244 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265261 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265313 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265330 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265397 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265504 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265522 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265580 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265661 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265661 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:08.265674 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265689 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:19:08.265696 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265706 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265712 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.265940 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265988 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266721 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266732 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266787 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266802 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266873 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266885 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266892 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266920 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266927 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266935 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266948 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266963 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267095 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267169 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267219 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267412 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267464 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267868 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.267912 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.268375 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.268443 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.268484 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.269549 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.269597 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.270926 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.272833 1558425 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:19:08.274128 1558425 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.274146 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:19:08.274171 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.274859 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275064 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275721 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.276192 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275698 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.277235 1558425 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:19:08.277261 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:19:08.277735 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.277888 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:19:08.277911 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:19:08.278583 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.278754 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.278813 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.278881 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:19:08.278897 1558425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:19:08.278922 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279033 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.279041 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:19:08.279054 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279564 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.279577 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:19:08.279593 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279642 1558425 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:19:08.281429 1558425 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:08.281448 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:19:08.281468 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.281533 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.282713 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.283764 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284087 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284228 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:08.284248 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:19:08.284269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.284461 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284503 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284726 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.284883 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.284950 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284965 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285137 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.285324 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.285515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.285599 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285736 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286034 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.286041 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286069 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286207 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.286615 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.286628 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286660 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286673 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.287215 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287232 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.287998 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287988 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288619 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288647 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.288829 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.288982 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289082 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.289115 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289387 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289495 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289954 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.289983 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.290152 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290230 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290347 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290431 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.291154 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.292418 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.292454 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.292433 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.292721 1558425 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-301682"
	I0630 14:19:08.292738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.292763 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.292887 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.293016 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.293150 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.293200 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.294549 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:19:08.296018 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:19:08.297203 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:19:08.298509 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:19:08.299741 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:19:08.301072 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:19:08.302287 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:19:08.303246 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0630 14:19:08.303926 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.304284 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:19:08.304575 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.304600 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.305069 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.305303 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.305513 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:19:08.305597 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:19:08.305646 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0630 14:19:08.308495 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0630 14:19:08.309009 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309265 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309301 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309500 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.309544 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309729 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.309915 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.310105 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.310445 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.310557 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.310962 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.310986 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312430 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.312542 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0630 14:19:08.312690 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.312715 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0630 14:19:08.312896 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.312908 1558425 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:08.312914 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312922 1558425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:19:08.312899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.312950 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.312967 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0630 14:19:08.313116 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.313130 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.313608 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.313798 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.314003 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314075 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.314701 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314761 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.314826 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.315163 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315447 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.315638 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315743 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.315801 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.316217 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.316239 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.316441 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.317458 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.317755 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.318404 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.318763 1558425 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:19:08.319446 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.319608 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.319686 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.319964 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.319978 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320265 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.320279 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:08.320350 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.320357 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320810 1558425 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:19:08.320976 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:19:08.321001 1558425 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:19:08.321024 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.321215 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:19:08.322277 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:19:08.322294 1558425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:19:08.322314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323097 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323112 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.323135 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:08.323167 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.323175 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:08.323273 1558425 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0630 14:19:08.323158 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.323505 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323867 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:19:08.323883 1558425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:19:08.323899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323920 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.323964 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0630 14:19:08.324118 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.324491 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.324603 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:19:08.324644 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.324757 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.325272 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.325293 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.327148 1558425 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:19:08.328448 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328463 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.328471 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0630 14:19:08.328485 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.328486 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:19:08.328506 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:19:08.328469 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.328555 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.329271 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329296 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329298 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329306 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.329427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329488 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329522 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329831 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329844 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329873 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329893 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.329908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329932 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.329965 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.330048 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330100 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330127 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.330233 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330571 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.330635 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330797 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.331366 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.331539 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.333151 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.333196 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.333924 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.333946 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.334093 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.334267 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.334413 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.334534 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.335093 1558425 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:19:08.336351 1558425 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:08.336368 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:19:08.336384 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.339580 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340100 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.340140 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.340523 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.340672 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.340813 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.350360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0630 14:19:08.350984 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.351790 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.351819 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.352186 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.352420 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.354260 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.356054 1558425 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:19:08.357435 1558425 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:19:08.358781 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:08.358803 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:19:08.358828 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.362552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.362966 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.362990 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.363100 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.363314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.363506 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.363630 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.439689 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
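
The bash pipeline above dumps the coredns ConfigMap, splices two fragments into the Corefile with sed, and replaces the ConfigMap in place. Reconstructed from the sed expressions themselves (not captured from the cluster), the edit inserts a bare "log" directive above the existing "errors" line and the following hosts block above the existing "forward . /etc/resolv.conf" line:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

This is the change the later start.go message "host record injected into CoreDNS's ConfigMap" refers to: queries for host.minikube.internal resolve to the host's 192.168.39.1 address, and fallthrough lets every other name continue through the rest of the Corefile.
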
	I0630 14:19:08.476644 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:19:08.843915 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.877498 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.886078 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:19:08.886117 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:19:08.911521 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.934599 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:09.020016 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:09.040482 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:19:09.040511 1558425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:19:09.043569 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:09.148704 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:09.202814 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:19:09.202869 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:19:09.278194 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:19:09.278231 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:19:09.295189 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:09.295224 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:19:09.299217 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:19:09.299263 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:19:09.332360 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:19:09.332403 1558425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:19:09.352402 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:19:09.352438 1558425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:19:09.405398 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:09.451227 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:09.755506 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:19:09.755546 1558425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:19:09.891227 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:19:09.891271 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:19:09.920129 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:09.920177 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:19:09.934092 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:19:09.934135 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:19:09.987104 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:09.987162 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:19:10.065936 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:10.412611 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:19:10.412651 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:19:10.472848 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:10.472884 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:19:10.534908 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:10.637801 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:19:10.637839 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:19:10.658361 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:10.787257 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:19:10.787289 1558425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:19:10.989751 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:11.047653 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:19:11.047693 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:19:11.196682 1558425 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.196715 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:19:11.291758 1558425 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.852019855s)
	I0630 14:19:11.291806 1558425 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:19:11.291816 1558425 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.815128335s)
	I0630 14:19:11.292560 1558425 node_ready.go:35] waiting up to 6m0s for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314454 1558425 node_ready.go:49] node "addons-301682" is "Ready"
	I0630 14:19:11.314498 1558425 node_ready.go:38] duration metric: took 21.89293ms for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314515 1558425 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:19:11.314579 1558425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
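
The pgrep probe above is how api_server.go decides the control plane is up: -f matches against the full command line, -x requires the pattern to match exactly, and -n picks the newest matching process, so a live kube-apiserver makes the command exit 0. A minimal Go sketch of such a wait loop follows; the fixed 500ms poll and 2-minute budget are illustrative assumptions, not minikube's actual retry logic:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer repeats the same probe the log shows until it
    // succeeds or the deadline passes. pgrep exits non-zero when no
    // process matches, so err == nil means kube-apiserver is running.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
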
	I0630 14:19:11.614705 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:19:11.614735 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:19:11.736486 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:19:11.736514 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:19:11.778191 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.869515 1558425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-301682" context rescaled to 1 replicas
	I0630 14:19:12.215816 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:19:12.215858 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:19:12.875440 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:19:12.875469 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:19:13.113763 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:19:13.113791 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:19:13.233897 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.233936 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:19:13.547481 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.908710 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.064741353s)
	I0630 14:19:13.908777 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.031226379s)
	I0630 14:19:13.908828 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908848 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908846 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.997298204s)
	I0630 14:19:13.908863 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908877 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908789 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908930 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908964 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.974334377s)
	I0630 14:19:13.908996 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909007 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909009 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.888949022s)
	I0630 14:19:13.909048 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909061 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909699 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.909716 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.909725 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909733 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910126 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910140 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910150 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910156 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910411 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910438 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910445 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910452 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910457 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910696 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910727 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910744 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910751 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910757 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.911970 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912059 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912080 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912106 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.912127 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.912244 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912321 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912376 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912399 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912409 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912423 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912436 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912476 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912487 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913952 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:15.489658 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:19:15.489718 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:15.493165 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493587 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:15.493623 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493976 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:15.494223 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:15.494515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:15.494707 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:15.765543 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:19:15.978232 1558425 addons.go:238] Setting addon gcp-auth=true in "addons-301682"
	I0630 14:19:15.978326 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:15.978844 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:15.978897 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:15.997982 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0630 14:19:15.998461 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:15.999138 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:15.999166 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:15.999618 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.000381 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:16.000428 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:16.018425 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0630 14:19:16.018996 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:16.019552 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:16.019578 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:16.020118 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.020378 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:16.022570 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:16.022848 1558425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:19:16.022880 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:16.026200 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027053 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:16.027107 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027360 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:16.027605 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:16.027797 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:16.027986 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:16.771513 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.727888765s)
	I0630 14:19:16.771570 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.622822849s)
	I0630 14:19:16.771591 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771607 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771630 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771647 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771647 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.36619116s)
	I0630 14:19:16.771673 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771688 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771767 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.320503654s)
	I0630 14:19:16.771831 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771842 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.705862816s)
	I0630 14:19:16.771865 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771873 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771904 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.236967233s)
	I0630 14:19:16.771940 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771966 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771989 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.113597897s)
	I0630 14:19:16.772016 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772026 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772112 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.782331879s)
	I0630 14:19:16.772132 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772140 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772199 1558425 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.457605469s)
	I0630 14:19:16.772216 1558425 api_server.go:72] duration metric: took 8.637102064s to wait for apiserver process to appear ...
	I0630 14:19:16.772223 1558425 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:19:16.772245 1558425 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0630 14:19:16.771847 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772472 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772489 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772500 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772508 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772567 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772660 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772670 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772678 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772685 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772744 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772768 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772774 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772782 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772789 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773055 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773073 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773096 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773119 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773125 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773131 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773137 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773371 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773380 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773388 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773398 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773540 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773583 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773592 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773602 1558425 addons.go:479] Verifying addon registry=true in "addons-301682"
	I0630 14:19:16.773651 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773661 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773668 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773675 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773927 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773965 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774128 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774333 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774357 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774383 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774389 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774656 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774694 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774695 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774703 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774710 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774722 1558425 addons.go:479] Verifying addon ingress=true in "addons-301682"
	I0630 14:19:16.774767 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774700 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774931 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.774943 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.774797 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775055 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.775066 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.775086 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.775936 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775954 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776331 1558425 out.go:177] * Verifying ingress addon...
	I0630 14:19:16.776373 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776407 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776413 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776457 1558425 out.go:177] * Verifying registry addon...
	I0630 14:19:16.776565 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776586 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776591 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776599 1558425 addons.go:479] Verifying addon metrics-server=true in "addons-301682"
	I0630 14:19:16.776668 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776681 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.778466 1558425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:19:16.779098 1558425 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-301682 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:19:16.779694 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:19:16.788556 1558425 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0630 14:19:16.789906 1558425 api_server.go:141] control plane version: v1.33.2
	I0630 14:19:16.789941 1558425 api_server.go:131] duration metric: took 17.709666ms to wait for apiserver health ...
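[editor's note] The healthz check recorded above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A minimal Go sketch of the same probe follows; the InsecureSkipVerify transport is an assumption for brevity (minikube's real client trusts the cluster's generated CA instead):

	package sketch

	import (
		"crypto/tls"
		"io"
		"net/http"
		"time"
	)

	// apiServerHealthy issues the probe seen in the log:
	// GET https://<host>:8443/healthz, healthy iff 200 and body "ok".
	func apiServerHealthy(endpoint string) (bool, error) {
		// assumption: skip CA verification for brevity only
		tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
		client := &http.Client{Transport: tr, Timeout: 5 * time.Second}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}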
	I0630 14:19:16.789955 1558425 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:19:16.796628 1558425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:19:16.796662 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:16.796921 1558425 system_pods.go:59] 15 kube-system pods found
	I0630 14:19:16.796954 1558425 system_pods.go:61] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.796961 1558425 system_pods.go:61] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.796972 1558425 system_pods.go:61] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.796976 1558425 system_pods.go:61] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.796984 1558425 system_pods.go:61] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.796987 1558425 system_pods.go:61] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.796992 1558425 system_pods.go:61] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.796997 1558425 system_pods.go:61] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.797004 1558425 system_pods.go:61] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.797011 1558425 system_pods.go:61] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.797018 1558425 system_pods.go:61] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.797028 1558425 system_pods.go:61] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.797035 1558425 system_pods.go:61] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.797042 1558425 system_pods.go:61] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.797049 1558425 system_pods.go:61] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.797057 1558425 system_pods.go:74] duration metric: took 7.094316ms to wait for pod list to return data ...
	I0630 14:19:16.797068 1558425 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:19:16.798790 1558425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:19:16.798807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:16.809885 1558425 default_sa.go:45] found service account: "default"
	I0630 14:19:16.809914 1558425 default_sa.go:55] duration metric: took 12.83884ms for default service account to be created ...
	I0630 14:19:16.809925 1558425 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:19:16.818226 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.818251 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.818525 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.818587 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:16.818715 1558425 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
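[editor's note] The "object has been modified" failure above is the apiserver's optimistic-concurrency conflict (HTTP 409): the local-path StorageClass changed between minikube's read and its write. The conventional fix is to re-read and re-apply the mutation with client-go's conflict-aware retry helper; the sketch below is illustrative, not minikube's actual code:

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault keeps re-reading the StorageClass and reapplying the
	// annotation change until the apiserver accepts the update, instead of
	// failing outright on a 409 Conflict.
	func markNonDefault(client kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
	}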
	I0630 14:19:16.836146 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.836179 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.836489 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.836539 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.898260 1558425 system_pods.go:86] 15 kube-system pods found
	I0630 14:19:16.898321 1558425 system_pods.go:89] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.898334 1558425 system_pods.go:89] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.898347 1558425 system_pods.go:89] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.898355 1558425 system_pods.go:89] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.898364 1558425 system_pods.go:89] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.898371 1558425 system_pods.go:89] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.898380 1558425 system_pods.go:89] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.898390 1558425 system_pods.go:89] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.898398 1558425 system_pods.go:89] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.898406 1558425 system_pods.go:89] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.898431 1558425 system_pods.go:89] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.898443 1558425 system_pods.go:89] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.898451 1558425 system_pods.go:89] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.898461 1558425 system_pods.go:89] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.898471 1558425 system_pods.go:89] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.898485 1558425 system_pods.go:126] duration metric: took 88.551205ms to wait for k8s-apps to be running ...
	I0630 14:19:16.898500 1558425 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:19:16.898565 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:19:17.317126 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:17.374411 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.596164186s)
	W0630 14:19:17.374478 1558425 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:19:17.374547 1558425 retry.go:31] will retry after 162.408109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
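[editor's note] The 'no matches for kind "VolumeSnapshotClass"' error above is a CRD establishment race: the snapshot CRDs created earlier in the same apply are not yet being served when the VolumeSnapshotClass object is submitted, so minikube falls back to a timed retry (162ms, then `kubectl apply --force` below). One illustrative alternative to the blind retry is to wait for the CRD's Established condition before applying dependent resources; this is a sketch under that assumption, not minikube's actual fix:

	package sketch

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForCRDEstablished polls until the named CRD reports the
	// Established condition, after which custom resources of that kind
	// (e.g. volumesnapshotclasses.snapshot.storage.k8s.io) can be applied.
	func waitForCRDEstablished(c apiextclient.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.TODO(), 250*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // CRD not visible yet; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}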
	I0630 14:19:17.425522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.537869 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:17.785630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.785674 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.306660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.306889 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.552015 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.004467325s)
	I0630 14:19:18.552194 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552225 1558425 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529350239s)
	I0630 14:19:18.552276 1558425 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.653693225s)
	I0630 14:19:18.552302 1558425 system_svc.go:56] duration metric: took 1.653798008s WaitForService to wait for kubelet
	I0630 14:19:18.552318 1558425 kubeadm.go:578] duration metric: took 10.417201876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:19:18.552241 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552348 1558425 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:19:18.552645 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552664 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552675 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552686 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552919 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552936 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552948 1558425 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:18.554300 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:18.555232 1558425 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:19:18.556214 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:19:18.556827 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:19:18.557433 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:19:18.557459 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:19:18.596354 1558425 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:19:18.596393 1558425 node_conditions.go:123] node cpu capacity is 2
	I0630 14:19:18.596408 1558425 node_conditions.go:105] duration metric: took 44.050461ms to run NodePressure ...
	I0630 14:19:18.596422 1558425 start.go:241] waiting for startup goroutines ...
	I0630 14:19:18.603104 1558425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:19:18.603135 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:18.637868 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:19:18.637900 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:19:18.748099 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:18.748163 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:19:18.792604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.792626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.843691 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:19.062533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.282741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.282766 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:19.563538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.721889 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.183953285s)
	I0630 14:19:19.721971 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.721990 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.722705 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:19.722805 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.722841 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.722861 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.722870 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.723362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.723392 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.784854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.785087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.084451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.338994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.339229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.491192 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.647431709s)
	I0630 14:19:20.491275 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491294 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491664 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.491685 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.491696 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491704 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491987 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:20.492026 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.492052 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.493344 1558425 addons.go:479] Verifying addon gcp-auth=true in "addons-301682"
	I0630 14:19:20.495394 1558425 out.go:177] * Verifying gcp-auth addon...
	I0630 14:19:20.497751 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:19:20.544088 1558425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:19:20.544122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
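[editor's note] The long run of kapi.go:96 "waiting for pod ..., current state: Pending" lines that follows is a readiness poll: for each label selector, list the matching pods and loop until all of them report the Ready condition. A minimal client-go sketch of such a poll (names and intervals are illustrative, not kapi.go itself):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPods polls until every pod matching the selector
	// (e.g. "kubernetes.io/minikube-addons=gcp-auth") is Ready.
	func waitForLabeledPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists just re-poll
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func podReady(p *corev1.Pod) bool {
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}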
	I0630 14:19:20.616283 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.790338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.794229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.001876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.103156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.286215 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:21.287404 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.501971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.603568 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.782426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.783543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.002607 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.061769 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.283406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.283458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:22.501544 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:22.563768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:22.782065 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:22.785105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.001506 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.062272 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.283151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:23.283566 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.501628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:23.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:23.782561 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:23.783298 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.001778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.062179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.351397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.351533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:24.502302 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:24.560819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:24.783532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:24.783606 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.000665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.066861 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.283070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:25.283328 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.501446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:25.566260 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:25.782894 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:25.783547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.005011 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.064792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.283606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:26.502271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:26.561300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:26.782991 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:26.783050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.001311 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.061332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.282733 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:27.284226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.501814 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:27.562410 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:27.783241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:27.783497 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.002164 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.060264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.282980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:28.283180 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.500523 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:28.560485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:28.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:28.783545 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.000985 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.061185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.282663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:29.282792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.500648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:29.560782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:29.782042 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:29.783619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.001946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.060881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.282133 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:30.283049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.500975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:30.560862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:30.782609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:30.782603 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.001534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.060703 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.282157 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.283847 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:31.500628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:31.560669 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:31.782294 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:31.782820 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.001862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.061034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.281959 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:32.282969 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.501719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:32.561075 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:32.783855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:32.783890 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.001382 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.060618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:33.289955 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.501909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:33.560848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:33.782531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:33.784168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.003605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.060279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.282397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:34.282808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.613798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:34.614652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:34.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:34.782800 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.000818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.060998 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.282231 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.283653 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:35.509348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:35.560724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:35.781570 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:35.783017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.001083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.060369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.702785 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:36.703123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:36.703555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.706970 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:36.804241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:36.804456 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.001688 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.061214 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.282908 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.284915 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:37.500826 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:37.560092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:37.782407 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:37.784106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.001428 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.061107 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:38.282046 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:38.283180 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:38.501297 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:38.563927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.189422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.189531 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.190495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.191248 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.282505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.282920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:39.500781 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:39.560685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:39.781821 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:39.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.001299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.071624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.283182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.283221 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:40.501026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:40.560313 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:40.783565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:40.783591 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.002088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.079056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.283365 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.283894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:41.501095 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:41.565670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:41.781792 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:41.782774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.000619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.060899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:42.283068 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.501445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:42.560361 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:42.783776 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:42.783964 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.001605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.060231 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.284417 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:43.284499 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.501005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:43.560455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:43.782135 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:43.783795 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.001747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.061008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:44.281520 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:44.282610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:44.501859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:44.561166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.190446 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.291473 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.291489 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.291572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.293575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:45.501432 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:45.560935 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:45.782091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:45.783835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.001576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.060855 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.281632 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.282695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:46.500503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:46.560648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:46.781708 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:46.783401 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.001349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.060664 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.288991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:47.289151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.501378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:47.560670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:47.783679 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:47.783934 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.000774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.063640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.283018 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.288264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:48.501060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:48.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:48.782532 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:48.783014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.001586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.060136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.284470 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:49.284616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.501493 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:49.560740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:49.782176 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:49.783205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.001724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.061175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.285556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:50.285655 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.501435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:50.561083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:50.782238 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:50.783288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.001421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.060971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.312768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:51.312922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.501057 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:51.560396 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:51.782795 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:51.783117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.001134 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.060267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.283193 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:52.283291 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.502021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:52.560380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:52.783076 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:52.784387 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.001939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:53.061183 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:53.281990 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:53.283259 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.502028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:53.560640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:53.782501 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:53.783649 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:54.001220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:54.061666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:54.282039 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:54.283121 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:54.501316 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:54.560447 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:54.783504 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:54.783727 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.000517 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:55.061087 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:55.282418 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.283456 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:55.502008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:55.560325 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:55.783555 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:55.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:56.001431 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:56.060991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:56.282249 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:56.283767 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:56.501025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:56.560838 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:56.782271 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:56.782994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.001527 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:57.061065 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:57.283743 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.283956 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:57.502182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:57.560567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:57.783238 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:57.783763 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.001345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:58.060462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:58.282685 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.282967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:58.501929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:58.561387 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:58.782616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:58.783122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.001904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:59.061081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:59.282072 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:59.282798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.501590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:59.561148 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:59.783157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:59.783870 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.000897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:00.061506 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:00.281697 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.282838 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:00.500884 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:00.561577 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:00.781570 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:00.783296 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.002271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:01.061072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:01.282434 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:01.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.501896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:01.561570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:01.782586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:01.782842 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.000727 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:02.061003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:02.282765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:02.282809 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.501507 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:02.560968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:02.782628 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:02.782871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:03.001603 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:03.060848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:03.282653 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:03.283752 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:03.501978 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:03.560629 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:03.781639 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:03.782897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.001586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:04.061045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:04.283389 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.283730 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:04.500996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:04.560611 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:04.783093 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:04.783260 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:05.001555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:05.060738 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:05.282896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:05.282927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:05.501053 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:05.602159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:05.783741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:05.783966 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:06.001070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:06.060590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:06.282798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:06.282853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:06.500761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:06.560993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:06.784950 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:06.785237 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:07.001699 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:07.061334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:07.282883 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:07.283203 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:07.502196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:07.561691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:07.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:07.783652 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:08.001648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:08.061773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:08.281568 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:08.283567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:08.502500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:08.561076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:08.782892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:08.783238 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.001899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:09.060933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:09.282681 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.283009 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:09.501744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:09.561385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:09.782769 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:09.783806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.000774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:10.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:10.282325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:10.283050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.501741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:10.560858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:10.783005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:10.783200 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.001016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:11.060512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:11.283758 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.284197 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:11.502206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:11.560441 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:11.782907 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:11.783577 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.001888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:12.060849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:12.282280 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:12.282418 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.501807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:12.561349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:12.783005 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:12.783005 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:13.002304 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:13.061129 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:13.283315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:13.283435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:13.501972 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:13.561333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:13.783487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:13.783655 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.001242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:14.061103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:14.282022 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.283080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:14.501717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:14.560630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:14.781894 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:14.782368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:15.001528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:15.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:15.282562 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:15.282888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:15.500950 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:15.560206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:15.782473 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:15.783016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.001340 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:16.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:16.283085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.283196 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:16.501224 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:16.560432 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:16.783077 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:16.783121 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.001536 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:17.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:17.281574 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.282511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:17.502499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:17.560896 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:17.781956 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:17.782624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:18.000392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:18.060943 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:18.283184 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:18.283879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:18.501537 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:18.562926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:18.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:18.782451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.001149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.061264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.282752 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:19.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.502206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.560605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.782509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.782554 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.002254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.061241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.282485 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.282882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:20.500924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.561822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.783542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.002205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.060747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.282021 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.282563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.505254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.561819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.782724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.000999 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.060710 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.281865 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:22.282163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.501978 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.562175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.782908 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.782992 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.001604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.061218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.282416 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.282830 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:23.501539 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.562050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.782303 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.784161 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.001477 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.060126 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.282030 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.283809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.501806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.602840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.782907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.000878 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.061123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.282013 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.283761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.504764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.606761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.782107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.782874 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.000621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.061556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.285974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.286315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.502580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.561105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.783739 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.000735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.061233 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.282071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:27.285152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.501573 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.561120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.782732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.782840 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.000630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.060922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.282390 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.283472 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.501080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.560454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.782967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.782976 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.237835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.237889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.336150 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.336331 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.501907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.602786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.782929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 452 near-identical kapi.go:96 poll entries elided: the same four label selectors (kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx) were each re-checked roughly every 500 ms from 14:20:30 through 14:21:26, and every poll reported "current state: Pending: [<nil>]" ...]
	I0630 14:21:26.502337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:26.561388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:26.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:26.783873 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.000786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.061090 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.282519 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.283219 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:27.502098 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:27.560684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:27.782103 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:27.782356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.001961 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.061081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.283082 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:28.283091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.502080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:28.560369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:28.782819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:28.782888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.001300 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.060528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.282927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:29.500881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:29.561931 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:29.782352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:29.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.001314 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.061754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:30.283911 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.501691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:30.561708 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:30.782920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:30.783505 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.018759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.118123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.283780 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.283813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:31.500732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:31.561257 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:31.782789 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:31.783857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.000941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.061352 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.283225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:32.283376 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.502377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:32.560813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:32.782071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:32.782893 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.001627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.061719 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.282356 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.282853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:33.501995 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:33.560218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:33.783100 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:33.783628 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.061301 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.282792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:34.283319 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.502265 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:34.603312 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:34.783237 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:34.783602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.001558 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.061771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.282165 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.283085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:35.501433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:35.560951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:35.782571 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:35.783567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.001993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.060500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.282630 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:36.282912 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.501547 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:36.561085 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:36.783668 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:36.783838 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:37.001644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:37.061735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:37.282616 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:37.283047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:37.501624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:37.562291 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:37.783863 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:37.784060 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:38.001210 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:38.060997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:38.283100 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:38.283242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:38.501949 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:38.561400 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:38.783522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:38.783562 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.001632 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:39.061775 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:39.283431 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:39.283517 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.502108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:39.561075 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:39.782288 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:39.783100 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:40.001536 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:40.061613 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:40.282272 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:40.282780 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:40.501799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:40.561026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:40.782057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:40.783645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.002564 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:41.062621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:41.282271 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:41.283169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.501391 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:41.562411 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:41.783324 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:41.783579 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:42.002705 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:42.061893 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:42.282583 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:42.283671 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:42.502733 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:42.562940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:42.782853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:42.783073 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:43.001824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:43.062102 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:43.282830 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:43.283751 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:43.501119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:43.560492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:43.784115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:43.784145 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.001522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:44.061345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:44.282831 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.283549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:44.503997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:44.607178 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:44.782832 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:44.783717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.002427 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:45.061729 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:45.282878 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:45.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.501997 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:45.561163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:45.783552 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:45.783659 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:46.001682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:46.062807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:46.282597 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:46.283939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:46.503275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:46.561513 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:46.784613 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:46.784911 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.001562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:47.061725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:47.283169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:47.283405 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.501322 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:47.561186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:47.782927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:47.784021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:48.001774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:48.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:48.282175 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:48.283210 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:48.502097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:48.561677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:48.782622 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:48.783039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:49.001787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:49.071403 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:49.282882 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:49.283702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:49.501062 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:49.560808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:49.781892 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:49.782731 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:50.001262 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:50.060694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:50.282041 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:50.283114 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:50.501527 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:50.561365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:50.786406 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:50.786567 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:51.001808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:51.061553 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:51.282657 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:51.283296 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:51.501742 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:51.561178 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:51.782922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:51.783680 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:52.000859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:52.061514 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:52.282067 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:52.282621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:52.502198 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:52.561158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:52.782564 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:52.782792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:53.001035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:53.060667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:53.281989 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:53.283220 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:53.501930 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:53.560987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:53.782210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:53.783173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:54.004903 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:54.061068 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:54.281852 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:54.282368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:54.501595 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:54.561905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:54.782333 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:54.783021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:55.001532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:55.060924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:55.281744 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:55.282438 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:55.501581 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:55.561843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:55.783311 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:55.784241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:56.001655 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:56.061418 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:56.282846 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:56.283057 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:56.501645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:56.562026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:56.782767 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:56.783836 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:57.000993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:57.061640 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:57.282555 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:57.284099 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:57.501478 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:57.561337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:57.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:57.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:58.001026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:58.061636 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:58.283771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:58.284039 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:58.501701 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:58.564159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:58.782721 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:58.783561 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:59.001195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:59.062667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:59.286778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:59.287064 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:59.501183 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:59.560532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:59.783236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:59.783406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:00.001562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:00.062563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:00.283855 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:00.284134 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:00.501865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:00.564486 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:00.782887 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:00.782984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:01.001528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:01.061955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:01.283003 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:01.283746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:01.501317 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:01.560704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:01.782191 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:01.783094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:02.001320 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:02.061973 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:02.283076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:02.283282 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:02.501799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:02.561666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:02.783208 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:02.783342 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:03.004810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:03.063284 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:03.283432 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:03.283755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:03.501473 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:03.560862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:03.782327 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:03.783798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:04.001354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:04.060898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:04.283327 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:04.283635 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:04.501503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:04.560912 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:04.782536 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:04.783678 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:05.001055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:05.061771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:05.282390 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:05.284013 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:05.501292 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:05.561056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:05.782798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:05.784365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:06.001516 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:06.061337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:06.282754 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:06.283371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:06.502565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:06.562077 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:06.783138 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:06.783697 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:07.000859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:07.062329 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:07.282379 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:07.282968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:07.501169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:07.560984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:07.782268 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:07.784049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:08.001494 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:08.061308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:08.283724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:08.284185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:08.502230 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:08.560967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:08.783790 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:08.783900 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:09.001053 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:09.060828 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:09.283284 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:09.283806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:09.501109 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:09.560617 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:09.782234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:09.783349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.001664 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:10.061833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:10.283401 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:10.283402 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.501704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:10.560961 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:10.783469 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.783522 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.001757 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:11.061124 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:11.283792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:11.283989 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.501103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:11.560840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:11.782033 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.783604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.003374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:12.060433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:12.282976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.283110 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:12.501047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:12.560677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:12.783921 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.784167 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.002696 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:13.063144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:13.282766 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:13.282879 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.501555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:13.561637 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:13.781893 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.782616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.001004 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:14.061103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:14.283205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.283446 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:14.501550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:14.562143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:14.783957 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.784112 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.001423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:15.062033 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:15.282424 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.282946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:15.501071 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:15.560348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:15.782780 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.783648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:16.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:16.282525 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:16.283260 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.501360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:16.560258 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:16.783827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.783875 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.001565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:17.060813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:17.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.283097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:17.501048 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:17.560778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:17.781850 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.783463 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:18.002176 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:18.060602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:18.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:18.283181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:18.501844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:18.560670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:18.783600 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:18.783637 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.002695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:19.061454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:19.282337 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:19.284196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.501898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:19.566207 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:19.783150 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.783388 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.001915 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:20.063129 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:20.284273 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.285468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:20.504702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:20.560957 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:20.785008 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.785055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:21.001554 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:21.061007 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:21.290166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:21.290315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:21.504702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:21.607046 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:21.782303 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:21.783112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.001610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:22.061225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:22.282696 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:22.283116 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.501584 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:22.562703 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:22.782599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.783389 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.002163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:23.061027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:23.283818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.283940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:23.501359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:23.561687 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:23.781738 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.783834 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.001106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:24.060840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:24.283144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.283159 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:24.501879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:24.561177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:24.784299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.784387 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.001461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:25.060909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:25.282763 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.283372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:25.501554 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:25.561056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:25.782472 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.002067 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:26.060538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:26.282323 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:26.284932 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.501783 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:26.561217 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:26.786385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.786624 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.002328 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:27.060923 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:27.282259 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.283369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:27.502704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:27.561567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:27.783592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.783609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.001238 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:28.061117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:28.283592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:28.283779 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.503754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:28.561835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:28.783295 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.783426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:29.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:29.061565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:29.284407 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:29.284751 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:29.501482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:29.561448 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:29.783602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:29.783747 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:30.000612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:30.061762 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:30.282244 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:30.282945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:30.501114 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:30.561086 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:30.783309 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:30.783420 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.001952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:31.060101 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:31.282326 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.284221 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:31.501777 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:31.561372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:31.783156 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.783322 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.002694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:32.061381 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:32.282764 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:32.284529 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.505575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:32.566298 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:32.784512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.784864 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:33.001675 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:33.060993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:33.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:33.283872 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:33.501278 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:33.560542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:33.787772 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:33.787934 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.001324 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:34.060773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:34.282840 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.283511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:34.502371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:34.560627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:34.783094 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.783413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:35.002904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.061777 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.283905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:35.283934 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.501100 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.560247 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.783592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.784358 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.001812 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.062616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.282087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:36.282661 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.500966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.562267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.783442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.001767 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.061035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.282352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.283181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:37.501481 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.562204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.782528 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.783035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.001204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.060871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.282324 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.283278 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.501823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.562308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.784023 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.784618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.000984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.062203 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.283474 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.502760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.563797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.782847 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.782939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.001158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.061550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.281624 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.282091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.501221 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.560905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.782931 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.782945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.002061 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.061582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.283006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.283254 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:41.501580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.561026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.785372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.785518 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.001833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.064672 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.282529 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.283845 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:42.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.561279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.783728 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.784425 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.002525 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.061268 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.283438 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.283504 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:43.501326 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.561048 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.782534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.782716 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.001543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.062385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.282669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.283862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:44.501191 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.562184 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.782210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.783841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.002615 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.282873 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.283074 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.501319 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.560538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.781794 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.783447 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.002122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.060715 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.282111 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:46.282760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.501006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.560037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.784753 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.784785 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.001157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.060804 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.283335 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:47.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.782851 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.783119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.001360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.282370 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:48.283342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.501709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.783888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.784092 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.001883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.060787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.283083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:49.283344 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.501731 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.782681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.000966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.060550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.283074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:50.501643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.561462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.783025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.002569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.063186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.283275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:51.283325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.501455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.560436 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.782975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.783423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.001631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.061667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.281818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.282342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.501284 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.560864 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.782151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.782348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.007368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.060641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.283706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.284276 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:53.501189 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.560654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.782398 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.782656 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.002682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.061286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.282383 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.283815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.501271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.560549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.790530 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.790755 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.001308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.284397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.284413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:55.501771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.781963 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.782941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.000822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.061650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.283524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.283580 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.501667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.560681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.782151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.281690 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.283202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.501647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.561213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.782612 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.001789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.282211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:58.284618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.500839 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.561378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.784612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.784669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.000744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.062091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.660112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.664035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:59.664534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.665074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.782692 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.783576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.003476 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.061094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.285714 1558425 kapi.go:107] duration metric: took 3m43.507242469s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0630 14:23:00.286859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.502299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.561094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.001892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.061673 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.501245 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.005689 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.283736 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.501952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.783177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.002017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.061604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.500854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.561092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.783701 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.063589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.283519 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.501728 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.566277 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.002269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:05.060852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.283974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.507100 1558425 kapi.go:107] duration metric: took 3m45.009344267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:23:05.509228 1558425 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-301682 cluster.
	I0630 14:23:05.510978 1558425 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:23:05.512549 1558425 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
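The three out.go lines above summarize how the gcp-auth addon behaves once it is ready. As a minimal sketch of the opt-out they describe, a pod manifest would carry the `gcp-auth-skip-secret` label at creation time; the pod name and image below are placeholders, and the "true" value follows the addon's usual convention (the message itself only requires the label key):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # placeholder name for illustration
      labels:
        gcp-auth-skip-secret: "true"  # per the message above: keeps gcp-auth from mounting credentials
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9  # placeholder image

For pods created before the addon finished, the same message points at re-running the enable step, i.e. something like `minikube addons enable gcp-auth --refresh`.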
	I0630 14:23:05.561380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.783374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.062392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.561684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.785144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.066028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.284562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.561973 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.785021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.060666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.561745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.783877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[identical kapi.go:96 poll entries repeat for both pods at ~500ms intervals from 14:23:09 through 14:25:03; the reported state stayed Pending: [<nil>] on every check]
	I0630 14:25:04.061121 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:04.283938 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:04.560330 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:04.783093 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:05.061253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:05.283468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:05.561349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:05.783656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:06.061451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:06.284555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:06.561027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:06.783118 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:07.060941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:07.283486 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:07.560979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:07.783987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:08.061469 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:08.282865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:08.560230 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:08.783905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:09.060919 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:09.284341 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:09.561725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:09.782920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:10.061064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:10.283364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:10.560694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:10.783580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:11.061012 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:11.282946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:11.560317 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:11.783830 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:12.060685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:12.283378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:12.561716 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:12.782965 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:13.061099 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:13.282813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:13.560694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:13.783665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:14.061372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:14.282565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:14.561326 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:14.783180 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:15.060939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:15.283013 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:15.560848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:15.783206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.061333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.283487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.560928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.779853 1558425 kapi.go:107] duration metric: took 6m0.000148464s to wait for kubernetes.io/minikube-addons=registry ...
	W0630 14:25:16.780114 1558425 out.go:270] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
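The six-minute failure above is minikube's kapi wait loop giving up: it polls pods matching a label selector until they all report Ready or the wait context's deadline expires. Below is a minimal client-go sketch of that pattern. It is an illustration, not minikube's actual kapi.go code; the kubeconfig loading, the ~500ms poll cadence, and the 6m budget are assumptions read off the log lines above.

// waitforpods.go: poll pods by label until Ready or a deadline, in the spirit
// of the kapi.go wait loop logged above. Names and timings are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=registry"
	// Poll roughly every 500ms (the cadence visible in the log) and give up
	// after 6 minutes (the "took 6m0s" duration metric above).
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		// With a deadline this is context.DeadlineExceeded, which is what the
		// "context deadline exceeded" warning above wraps.
		fmt.Println("waiting for pods:", err)
	}
}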
	I0630 14:25:17.061823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:17.560570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.061810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.557742 1558425 kapi.go:107] duration metric: took 6m0.000905607s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0630 14:25:18.557918 1558425 out.go:270] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
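Both addon waits fail identically: a context carrying a 6-minute deadline expires, and the error propagates upward as the literal string "context deadline exceeded". A tiny self-contained illustration of that mechanism (generic Go, nothing minikube-specific; the 50ms deadline stands in for the real 6m budget):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func main() {
	// A short deadline stands in for the 6m addon-wait budget.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// Simulate a poll loop that never observes a Ready pod.
	for {
		select {
		case <-ctx.Done():
			// This is the error string wrapped into the warnings above.
			fmt.Println(ctx.Err())                                      // context deadline exceeded
			fmt.Println(errors.Is(ctx.Err(), context.DeadlineExceeded)) // true
			return
		case <-time.After(10 * time.Millisecond):
			// next poll iteration; the pod is still Pending
		}
	}
}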
	I0630 14:25:18.560047 1558425 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth
	I0630 14:25:18.561439 1558425 addons.go:514] duration metric: took 6m10.426236235s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth]
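When an addon wait times out like this, the natural next step is to ask why the pod never left Pending by reading its container statuses, which is what kubectl describe surfaces. The sketch below is a rough programmatic analogue under assumptions: the file name and label selector are illustrative, and for a pull failure the Waiting reason would print as something like ImagePullBackOff.

// describe-waiting.go: print the Waiting reason for each container of pods
// matching a label; a minimal analogue of `kubectl describe pod` output.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil {
				// For a pull failure this prints e.g. "ImagePullBackOff".
				fmt.Printf("%s/%s: %s: %s\n", p.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}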
	I0630 14:25:18.561506 1558425 start.go:246] waiting for cluster config update ...
	I0630 14:25:18.561537 1558425 start.go:255] writing updated cluster config ...
	I0630 14:25:18.561951 1558425 ssh_runner.go:195] Run: rm -f paused
	I0630 14:25:18.569844 1558425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:25:18.574216 1558425 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.580161 1558425 pod_ready.go:94] pod "coredns-674b8bbfcf-gcxhf" is "Ready"
	I0630 14:25:18.580187 1558425 pod_ready.go:86] duration metric: took 5.939771ms for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.583580 1558425 pod_ready.go:83] waiting for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.589631 1558425 pod_ready.go:94] pod "etcd-addons-301682" is "Ready"
	I0630 14:25:18.589656 1558425 pod_ready.go:86] duration metric: took 6.047747ms for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.592675 1558425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.598838 1558425 pod_ready.go:94] pod "kube-apiserver-addons-301682" is "Ready"
	I0630 14:25:18.598865 1558425 pod_ready.go:86] duration metric: took 6.165834ms for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.608664 1558425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.974819 1558425 pod_ready.go:94] pod "kube-controller-manager-addons-301682" is "Ready"
	I0630 14:25:18.974852 1558425 pod_ready.go:86] duration metric: took 366.160564ms for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.183963 1558425 pod_ready.go:83] waiting for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.575199 1558425 pod_ready.go:94] pod "kube-proxy-cm28f" is "Ready"
	I0630 14:25:19.575240 1558425 pod_ready.go:86] duration metric: took 391.247311ms for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.774681 1558425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.173968 1558425 pod_ready.go:94] pod "kube-scheduler-addons-301682" is "Ready"
	I0630 14:25:20.174011 1558425 pod_ready.go:86] duration metric: took 399.300804ms for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.174030 1558425 pod_ready.go:40] duration metric: took 1.603886991s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
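The pod_ready.go lines above follow a per-pod "duration metric" pattern: each control-plane pod's wait is timed individually and the total is reported at the end. A minimal sketch of just that timing structure, assuming a stubbed-out readiness poll (waitReady here is a placeholder, not minikube's helper):

// podwait_timing.go: time each pod wait and the overall extra wait, mirroring
// the duration metrics in the log above. Pod names are taken from the log.
package main

import (
	"fmt"
	"time"
)

// waitReady stands in for a real readiness poll against the API server.
func waitReady(pod string) {
	time.Sleep(5 * time.Millisecond)
}

func main() {
	pods := []string{
		"coredns-674b8bbfcf-gcxhf",
		"etcd-addons-301682",
		"kube-apiserver-addons-301682",
		"kube-controller-manager-addons-301682",
		"kube-proxy-cm28f",
		"kube-scheduler-addons-301682",
	}
	total := time.Now()
	for _, p := range pods {
		start := time.Now()
		waitReady(p)
		fmt.Printf("duration metric: took %s for pod %q to be \"Ready\"\n",
			time.Since(start), p)
	}
	fmt.Printf("duration metric: took %s for extra waiting\n", time.Since(total))
}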
	I0630 14:25:20.223671 1558425 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:25:20.225538 1558425 out.go:177] * Done! kubectl is now configured to use "addons-301682" cluster and "default" namespace by default
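The "minor skew: 0" note two lines up is the absolute difference between the kubectl client's and the cluster's minor versions (1.33.2 vs 1.33.2). A small sketch of that computation, assuming plain "major.minor.patch" version strings (real kubectl versions can carry pre-release suffixes this parser ignores):

// skew.go: compute the minor-version skew logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.33.2", "1.33.2"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}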
	
	
	==> CRI-O <==
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.444445041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293936444421125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22a81b79-19bd-4235-8058-a942257a409a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.447089055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd110231-ffd8-431c-bd4a-038142bb6328 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.447169074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd110231-ffd8-431c-bd4a-038142bb6328 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.447928762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-cont
roller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be14803
9a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e
956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886cabb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]
string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293
150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9d
e234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd110231-ffd8-431c-bd4a-038142bb6328 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.489882232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=071958b5-2766-4df5-b738-a8989d3bfa14 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.490006041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=071958b5-2766-4df5-b738-a8989d3bfa14 name=/runtime.v1.RuntimeService/Version
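The request/response pair just above is the CRI RuntimeService/Version RPC that kubelet (and tools like crictl) issue over CRI-O's unix socket. A minimal sketch of the same call with the cri-api Go client follows; the socket path is CRI-O's conventional default and the use of grpc.NewClient (grpc-go >= 1.63) is an assumption about the local toolchain.

// criversion.go: issue the RuntimeService/Version RPC whose exchange appears
// in the CRI-O debug log above.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	// Matches the logged response: Version:0.1.0, RuntimeName:cri-o,
	// RuntimeVersion:1.29.1, RuntimeApiVersion:v1.
	fmt.Printf("%s %s (CRI %s, version %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion, resp.Version)
}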
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.491122794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49cb833d-cc12-4d88-b3b1-095ecff5be9d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.492124665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293936492097654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49cb833d-cc12-4d88-b3b1-095ecff5be9d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.492617213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffa44fb6-af96-4c68-9e1c-37dc537f9d83 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.492806885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffa44fb6-af96-4c68-9e1c-37dc537f9d83 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.493728817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-cont
roller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be14803
9a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e
956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886cabb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]
string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293
150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9d
e234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffa44fb6-af96-4c68-9e1c-37dc537f9d83 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.535755350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ebe385f-227e-4b92-ba7e-3ecbf3a48801 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.535835472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ebe385f-227e-4b92-ba7e-3ecbf3a48801 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.537048307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a723bbd-66e2-4377-901c-c68e9dba26ab name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.538033806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293936538003078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a723bbd-66e2-4377-901c-c68e9dba26ab name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.538706731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57e45a91-6735-4090-b073-e3f789af8fc7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.538759957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57e45a91-6735-4090-b073-e3f789af8fc7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.539250235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-cont
roller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be14803
9a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e
956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886cabb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]
string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293
150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9d
e234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57e45a91-6735-4090-b073-e3f789af8fc7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.574724777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae429f0a-ac64-4072-8e35-29f5a5c90f78 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.574802154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae429f0a-ac64-4072-8e35-29f5a5c90f78 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.576122297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0732e5dc-8d4a-43ae-95f0-1d8b3c622944 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.577107743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293936577083384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0732e5dc-8d4a-43ae-95f0-1d8b3c622944 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.577786339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=239cb5d5-9477-4bdc-8a95-8e3108a3f2bc name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.577926505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=239cb5d5-9477-4bdc-8a95-8e3108a3f2bc name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:32:16 addons-301682 crio[849]: time="2025-06-30 14:32:16.578473368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:
map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-con
troller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-cont
roller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be14803
9a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e
956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9xc5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886cabb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]
string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293
150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9d
e234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,PodSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=239cb5d5-9477-4bdc-8a95-8e3108a3f2bc name=/runtime.v1.RuntimeService/ListContainers
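The ListContainers dump above is one very long CRI-O log line serialized through the otel-collector interceptor, which makes it hard to scan. As a convenience only (not part of the test run), the same listing can be pulled in readable form from inside the node; this assumes the profile name addons-301682 used throughout these logs and that jq is available on the host:

  minikube ssh -p addons-301682 "sudo crictl ps -a -o json" \
    | jq -r '.containers[] | [.metadata.name, .state] | @tsv'

crictl's JSON output carries the same metadata.name and state fields that appear serialized in the log line above.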
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	ccb1fec83c55c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   744d3a8558a51       busybox
	f4356fb8a203d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	505ec6a97e3e1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	0e8810b68e820       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            8 minutes ago       Running             liveness-probe                           0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	977ef3af77456       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	12db79e5b741e       registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15                             9 minutes ago       Running             controller                               0                   e27c33843e336       ingress-nginx-controller-67687b59dd-hqql8
	5dfe9d02b1b1a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	470ef449849e9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      11 minutes ago      Running             volume-snapshot-controller               0                   4736a1c095805       snapshot-controller-68b874b76f-m97pd
	90d1724e2a8e9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   11 minutes ago      Running             csi-external-health-monitor-controller   0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	089511c925cdb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      11 minutes ago      Running             volume-snapshot-controller               0                   901b27bd18ec3       snapshot-controller-68b874b76f-zvnk2
	c2e8c85ce8151       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              11 minutes ago      Running             csi-resizer                              0                   754958dc28d19       csi-hostpath-resizer-0
	ba49554ce7e85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             11 minutes ago      Running             csi-attacher                             0                   ef302c090f9a8       csi-hostpath-attacher-0
	78d53c20b85a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   11 minutes ago      Exited              patch                                    0                   4e975a881fa17       ingress-nginx-admission-patch-9xc5z
	1322675057a2e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             11 minutes ago      Running             local-path-provisioner                   0                   54b7dce23ad65       local-path-provisioner-76f89f99b5-gzp6b
	8394bba22fffd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   11 minutes ago      Exited              create                                   0                   7cdcf7a057d5a       ingress-nginx-admission-create-fnqjq
	87b37034569df       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              11 minutes ago      Running             registry-proxy                           0                   ab80df45e204e       registry-proxy-2dgr9
	aca5b14e1bc43       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             12 minutes ago      Running             minikube-ingress-dns                     0                   7f285ffa7ac9c       kube-ingress-dns-minikube
	70d635c9d667c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     12 minutes ago      Running             amd-gpu-device-plugin                    0                   3d37e16d91d2b       amd-gpu-device-plugin-g5z6w
	f3766ac202b89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             12 minutes ago      Running             storage-provisioner                      0                   97a7ca87e0fdb       storage-provisioner
	5aadabb8b1bfc       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                                             13 minutes ago      Running             coredns                                  0                   78956e77203cb       coredns-674b8bbfcf-gcxhf
	f10061ba824c0       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                                                             13 minutes ago      Running             kube-proxy                               0                   b60868a950e81       kube-proxy-cm28f
	ccc99095a0e73       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                                                             13 minutes ago      Running             kube-apiserver                           0                   3b49e7f986574       kube-apiserver-addons-301682
	b4d0fe15b4640       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                                                             13 minutes ago      Running             kube-controller-manager                  0                   793d3507bd395       kube-controller-manager-addons-301682
	a117b554832ef       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                                             13 minutes ago      Running             etcd                                     0                   ecf8d198683c7       etcd-addons-301682
	4e556fe1e25cc       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                                                             13 minutes ago      Running             kube-scheduler                           0                   d882c0c670fce       kube-scheduler-addons-301682
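Worth noting in the status table: registry-proxy-2dgr9 is Running, but there is no container row for the registry pod itself, consistent with an image that was never pulled. The pod's events can be read directly (the pod name registry-694bd45846-x8cnn is taken from the node's pod list further down):

  kubectl --context addons-301682 -n kube-system get events \
    --field-selector involvedObject.name=registry-694bd45846-x8cnn \
    --sort-by=.lastTimestamp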
	
	
	==> coredns [5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2] <==
	[INFO] 10.244.0.7:45729 - 1768 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000200332s
	[INFO] 10.244.0.7:56789 - 42719 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000286303s
	[INFO] 10.244.0.7:56789 - 59592 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000134004s
	[INFO] 10.244.0.7:56789 - 12726 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000183444s
	[INFO] 10.244.0.7:56789 - 16278 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000145469s
	[INFO] 10.244.0.7:56789 - 12173 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.0001245s
	[INFO] 10.244.0.7:56789 - 20390 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000122198s
	[INFO] 10.244.0.7:56789 - 10023 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000180615s
	[INFO] 10.244.0.7:56789 - 61578 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126074s
	[INFO] 10.244.0.7:39365 - 13051 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000692643s
	[INFO] 10.244.0.7:39365 - 23052 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000292461s
	[INFO] 10.244.0.7:39365 - 25105 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000120455s
	[INFO] 10.244.0.7:39365 - 56649 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000095184s
	[INFO] 10.244.0.7:39365 - 11091 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000127991s
	[INFO] 10.244.0.7:39365 - 33 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000272931s
	[INFO] 10.244.0.7:39365 - 58953 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000147051s
	[INFO] 10.244.0.7:39365 - 46805 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000498615s
	[INFO] 10.244.0.7:46751 - 2522 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000165248s
	[INFO] 10.244.0.7:46751 - 4295 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000334414s
	[INFO] 10.244.0.7:46751 - 23163 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000089829s
	[INFO] 10.244.0.7:46751 - 27516 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000227308s
	[INFO] 10.244.0.7:46751 - 24806 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00007612s
	[INFO] 10.244.0.7:46751 - 32821 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000319279s
	[INFO] 10.244.0.7:46751 - 15253 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000104656s
	[INFO] 10.244.0.7:46751 - 55060 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000131999s
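The alternating NXDOMAIN/NOERROR pairs above are ordinary search-domain expansion rather than a DNS fault: with the cluster default ndots:5, a lookup of registry.kube-system.svc.cluster.local is first tried against each suffix in the pod's search list (kube-system.svc.cluster.local, svc.cluster.local, cluster.local), each answering NXDOMAIN, before the absolute name resolves NOERROR in well under a millisecond. The search list can be confirmed from any pod, for example the busybox pod in the status table:

  kubectl --context addons-301682 exec busybox -- cat /etc/resolv.conf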
	
	
	==> describe nodes <==
	Name:               addons-301682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-301682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-301682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-301682
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-301682"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-301682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:32:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:19:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-301682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3f7748b45e54c5d95a766f7ac118097
	  System UUID:                c3f7748b-45e5-4c5d-95a7-66f7ac118097
	  Boot ID:                    4dcad91c-eb4d-46c9-ae52-10be6c00fd59
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  ingress-nginx               ingress-nginx-controller-67687b59dd-hqql8                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 amd-gpu-device-plugin-g5z6w                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-674b8bbfcf-gcxhf                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-h4qg2                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-301682                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-301682                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-301682                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cm28f                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-301682                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-694bd45846-x8cnn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-proxy-2dgr9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-68b874b76f-m97pd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-68b874b76f-zvnk2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  local-path-storage          local-path-provisioner-76f89f99b5-gzp6b                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-301682 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-301682 event: Registered Node addons-301682 in Controller
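Two things stand out in the node report: CPU requests already total 850m of the 2 CPUs allocatable, so scheduling headroom is thin, and all four node conditions are healthy, so node pressure is not the cause of any failing pods. Both views can be re-collected against a live cluster with:

  kubectl --context addons-301682 describe node addons-301682
  kubectl --context addons-301682 top node

The second command depends on metrics-server and may well fail here, since the apiserver log below shows the v1beta1.metrics.k8s.io APIService returning 503s for part of the run.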
	
	
	==> dmesg <==
	[  +3.981178] kauditd_printk_skb: 99 callbacks suppressed
	[ +14.133007] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.888041] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:20] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.101498] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.564016] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.018820] kauditd_printk_skb: 4 callbacks suppressed
	[Jun30 14:22] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.468740] kauditd_printk_skb: 33 callbacks suppressed
	[Jun30 14:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.720029] kauditd_printk_skb: 37 callbacks suppressed
	[Jun30 14:25] kauditd_printk_skb: 33 callbacks suppressed
	[  +3.578772] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.590938] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.177192] kauditd_printk_skb: 20 callbacks suppressed
	[Jun30 14:26] kauditd_printk_skb: 4 callbacks suppressed
	[ +46.460054] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:27] kauditd_printk_skb: 2 callbacks suppressed
	[ +35.275184] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:29] kauditd_printk_skb: 9 callbacks suppressed
	[ +22.041327] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:30] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:31] kauditd_printk_skb: 2 callbacks suppressed
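The kauditd_printk_skb entries are the kernel audit subsystem rate-limiting its own output ("N callbacks suppressed") and are routine on a node generating many audit events; nothing in this dmesg excerpt points at OOM kills or hardware trouble. If a fuller view is wanted, the ring buffer can be re-read in the VM (the image is Buildroot per the node report, so plain dmesg with no flags is the safe invocation):

  minikube ssh -p addons-301682 "dmesg | tail -n 100"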
	
	
	==> etcd [a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb] <==
	{"level":"info","ts":"2025-06-30T14:21:16.254726Z","caller":"traceutil/trace.go:171","msg":"trace[347540210] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"200.343691ms","start":"2025-06-30T14:21:16.054373Z","end":"2025-06-30T14:21:16.254716Z","steps":["trace[347540210] 'agreement among raft nodes before linearized reading'  (duration: 200.191188ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.254998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.889254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:21:16.255051Z","caller":"traceutil/trace.go:171","msg":"trace[2072353184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"259.964064ms","start":"2025-06-30T14:21:15.995079Z","end":"2025-06-30T14:21:16.255043Z","steps":["trace[2072353184] 'agreement among raft nodes before linearized reading'  (duration: 259.892612ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:21:16.256094Z","caller":"traceutil/trace.go:171","msg":"trace[752785918] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"419.629539ms","start":"2025-06-30T14:21:15.836340Z","end":"2025-06-30T14:21:16.255969Z","steps":["trace[752785918] 'process raft request'  (duration: 416.770167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.256259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:21:15.836292Z","time spent":"419.882706ms","remote":"127.0.0.1:55816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1189 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-06-30T14:22:57.074171Z","caller":"traceutil/trace.go:171","msg":"trace[97580462] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"235.032412ms","start":"2025-06-30T14:22:56.839110Z","end":"2025-06-30T14:22:57.074143Z","steps":["trace[97580462] 'process raft request'  (duration: 234.613297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.462692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650406Z","caller":"traceutil/trace.go:171","msg":"trace[1036457483] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"155.081366ms","start":"2025-06-30T14:22:59.495275Z","end":"2025-06-30T14:22:59.650356Z","steps":["trace[1036457483] 'range keys from in-memory index tree'  (duration: 154.411147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:22:59.650586Z","caller":"traceutil/trace.go:171","msg":"trace[806257844] transaction","detail":"{read_only:false; response_revision:1386; number_of_response:1; }","duration":"115.895314ms","start":"2025-06-30T14:22:59.534680Z","end":"2025-06-30T14:22:59.650576Z","steps":["trace[806257844] 'process raft request'  (duration: 113.707335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"485.393683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650888Z","caller":"traceutil/trace.go:171","msg":"trace[707366630] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1385; }","duration":"486.585604ms","start":"2025-06-30T14:22:59.164295Z","end":"2025-06-30T14:22:59.650881Z","steps":["trace[707366630] 'range keys from in-memory index tree'  (duration: 485.334873ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.650922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.164282Z","time spent":"486.621786ms","remote":"127.0.0.1:55612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.09899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651010Z","caller":"traceutil/trace.go:171","msg":"trace[926388769] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"375.285797ms","start":"2025-06-30T14:22:59.275719Z","end":"2025-06-30T14:22:59.651005Z","steps":["trace[926388769] 'range keys from in-memory index tree'  (duration: 374.055569ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.275706Z","time spent":"375.316283ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.573265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651095Z","caller":"traceutil/trace.go:171","msg":"trace[444156936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"374.826279ms","start":"2025-06-30T14:22:59.276264Z","end":"2025-06-30T14:22:59.651090Z","steps":["trace[444156936] 'range keys from in-memory index tree'  (duration: 373.54342ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.276255Z","time spent":"374.850773ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.221471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651162Z","caller":"traceutil/trace.go:171","msg":"trace[72079455] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1385; }","duration":"136.411789ms","start":"2025-06-30T14:22:59.514744Z","end":"2025-06-30T14:22:59.651156Z","steps":["trace[72079455] 'range keys from in-memory index tree'  (duration: 135.196228ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:25:50.156282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.241875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-06-30T14:25:50.156408Z","caller":"traceutil/trace.go:171","msg":"trace[1656189336] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1889; }","duration":"105.429353ms","start":"2025-06-30T14:25:50.050958Z","end":"2025-06-30T14:25:50.156387Z","steps":["trace[1656189336] 'range keys from in-memory index tree'  (duration: 105.167742ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:28:59.297152Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1538}
	{"level":"info","ts":"2025-06-30T14:28:59.333481Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1538,"took":"35.184312ms","hash":3459685430,"current-db-size-bytes":7704576,"current-db-size":"7.7 MB","current-db-size-in-use-bytes":4759552,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2025-06-30T14:28:59.333691Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3459685430,"revision":1538,"compact-revision":-1}
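The repeated "apply request took too long" warnings record read and txn latencies of roughly 100-500 ms against etcd's 100 ms expectation, which is typical of a 2-vCPU nested VM under parallel addon load rather than a sign of store corruption; the scheduled compaction at revision 1538 finishing in about 35 ms suggests the keyspace itself is healthy. A point-in-time check can be run inside the etcd pod; the certificate paths below are the usual minikube locations and should be treated as an assumption:

  kubectl --context addons-301682 -n kube-system exec etcd-addons-301682 -- sh -c \
    'etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table'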
	
	
	==> kernel <==
	 14:32:16 up 13 min,  0 users,  load average: 0.28, 0.51, 0.53
	Linux addons-301682 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a] <==
	I0630 14:20:17.020266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0630 14:20:17.020272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0630 14:20:30.566598       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.249.255:443: connect: connection refused" logger="UnhandledError"
	W0630 14:20:30.568692       1 handler_proxy.go:99] no RequestInfo found in the context
	E0630 14:20:30.568788       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0630 14:20:30.592794       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0630 14:20:30.602722       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0630 14:25:32.039384       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43658: use of closed network connection
	E0630 14:25:32.235328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43690: use of closed network connection
	I0630 14:25:35.327796       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:40.911437       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:25:41.137079       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.71.181"}
	I0630 14:25:41.142822       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:41.721263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.215.125"}
	I0630 14:25:47.346218       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:26:31.606219       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:27:03.338971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:27:51.135976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:00.946999       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:59.400677       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:31:51.314031       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0630 14:31:52.350724       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
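The v1beta1.metrics.k8s.io errors around 14:20 show the aggregated metrics-server backend at 10.109.249.255:443 refusing connections while it came up, and the "Nothing (removed from the queue)" entry at 14:26:31 shows the aggregation controller eventually settling. On a live cluster the APIService's current condition is visible with:

  kubectl --context addons-301682 get apiservice v1beta1.metrics.k8s.io \
    -o jsonpath='{.status.conditions[*].message}'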
	
	
	==> kube-controller-manager [b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0] <==
	E0630 14:28:33.141336       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.240744       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.265499       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.301094       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.355413       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.455220       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.632282       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:33.965912       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:34.621606       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:35.919826       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:38.493394       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:43.650839       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:28:53.905832       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0630 14:29:12.559067       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I0630 14:29:58.729598       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0630 14:31:52.352989       1 reflector.go:200] "Failed to watch" err="the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:31:53.610998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:31:55.274424       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:32:00.912894       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0630 14:32:01.568174       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0630 14:32:08.180369       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0630 14:32:08.181192       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:32:08.623346       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0630 14:32:08.623448       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0630 14:32:12.901188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
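	The repeated yakd-dashboard deletion failures above are the namespace controller retrying with backoff until the last pod in the namespace is reaped (it succeeds at 14:29:58), and the *v1.PartialObjectMetadata watch failures are the metadata informer still watching an API resource that stopped being served during addon teardown, most likely the inspektor-gadget CRDs given the "gadget" namespace deletion at 14:32:01. Both conditions can be confirmed from the test host with standard kubectl; the namespace and resource names below are taken from the log:
	
	  kubectl --context addons-301682 get pods -n yakd-dashboard --no-headers
	  kubectl --context addons-301682 get namespace yakd-dashboard -o jsonpath='{.spec.finalizers}'
	  kubectl --context addons-301682 api-resources | grep -i gadget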
	
	
	==> kube-proxy [f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b] <==
	E0630 14:19:09.616075       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:19:09.628197       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0630 14:19:09.628280       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:19:09.728584       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:19:09.728641       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:19:09.728663       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:19:09.760004       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:19:09.760419       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:19:09.760431       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:19:09.761800       1 config.go:199] "Starting service config controller"
	I0630 14:19:09.761820       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:19:09.764743       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:19:09.764796       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:19:09.764830       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:19:09.764834       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:19:09.770113       1 config.go:329] "Starting node config controller"
	I0630 14:19:09.770142       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:19:09.862889       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:19:09.865227       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:19:09.865265       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:19:09.870697       1 shared_informer.go:357] "Caches are synced" controller="node config"
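	Two lines in this kube-proxy log deserve a gloss. The nftables cleanup error at startup is expected on kernels without ip6 nftables support; kube-proxy then falls back to the iptables proxier, which the "Using iptables Proxier" line confirms. The nodePortAddresses warning can be silenced with the flag the message itself suggests, --nodeport-addresses=primary. The unsupported operation can be reproduced by hand, assuming shell access to the node:
	
	  # the same command kube-proxy's cleanup attempted, per the error above
	  nft add table ip6 kube-proxy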
	
	
	==> kube-scheduler [4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627] <==
	E0630 14:19:00.996185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:19:00.996326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:19:00.996316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:00.996403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:00.996618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:00.998826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:00.999006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:01.002700       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.002834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:01.865362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:01.884714       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.908759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:01.937379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:19:01.938367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:01.983087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:19:02.032891       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:19:02.058487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:19:02.131893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:02.191157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:02.310584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:02.326588       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:02.381605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 14:19:04.769814       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
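	The burst of "forbidden" list failures between 14:19:00 and 14:19:02 is a startup race: the scheduler's informers begin listing before the system:kube-scheduler RBAC bindings have been reconciled, and the errors stop once caches sync at 14:19:04. Had they persisted, a quick check would be (standard kubectl, user name from the log):
	
	  kubectl --context addons-301682 auth can-i list pods --as=system:kube-scheduler
	  kubectl --context addons-301682 get clusterrolebinding system:kube-scheduler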
	
	
	==> kubelet <==
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.837129    1543 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-bpffs" (OuterVolumeSpecName: "bpffs") pod "f033c8a2-1ce7-4009-8b24-756b9f31550e" (UID: "f033c8a2-1ce7-4009-8b24-756b9f31550e"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.840874    1543 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f033c8a2-1ce7-4009-8b24-756b9f31550e-kube-api-access-95d69" (OuterVolumeSpecName: "kube-api-access-95d69") pod "f033c8a2-1ce7-4009-8b24-756b9f31550e" (UID: "f033c8a2-1ce7-4009-8b24-756b9f31550e"). InnerVolumeSpecName "kube-api-access-95d69". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937137    1543 reconciler_common.go:299] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-run\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937189    1543 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-95d69\" (UniqueName: \"kubernetes.io/projected/f033c8a2-1ce7-4009-8b24-756b9f31550e-kube-api-access-95d69\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937200    1543 reconciler_common.go:299] "Volume detached for volume \"usr\" (UniqueName: \"kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-usr\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937208    1543 reconciler_common.go:299] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-bpffs\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937220    1543 reconciler_common.go:299] "Volume detached for volume \"opt\" (UniqueName: \"kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-opt\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937229    1543 reconciler_common.go:299] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-debugfs\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937236    1543 reconciler_common.go:299] "Volume detached for volume \"oci\" (UniqueName: \"kubernetes.io/empty-dir/f033c8a2-1ce7-4009-8b24-756b9f31550e-oci\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937242    1543 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f033c8a2-1ce7-4009-8b24-756b9f31550e-config\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937249    1543 reconciler_common.go:299] "Volume detached for volume \"var\" (UniqueName: \"kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-var\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:51 addons-301682 kubelet[1543]: I0630 14:31:51.937257    1543 reconciler_common.go:299] "Volume detached for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/f033c8a2-1ce7-4009-8b24-756b9f31550e-proc\") on node \"addons-301682\" DevicePath \"\""
	Jun 30 14:31:52 addons-301682 kubelet[1543]: I0630 14:31:52.505884    1543 scope.go:117] "RemoveContainer" containerID="608862faed0c0e6f5206c70142f3721860dd5dc22f544d982e850e234de5a7f0"
	Jun 30 14:31:53 addons-301682 kubelet[1543]: I0630 14:31:53.706121    1543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f033c8a2-1ce7-4009-8b24-756b9f31550e" path="/var/lib/kubelet/pods/f033c8a2-1ce7-4009-8b24-756b9f31550e/volumes"
	Jun 30 14:31:54 addons-301682 kubelet[1543]: E0630 14:31:54.139940    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293914139394946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:31:54 addons-301682 kubelet[1543]: E0630 14:31:54.140081    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293914139394946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:32:01 addons-301682 kubelet[1543]: I0630 14:32:01.701195    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:32:01 addons-301682 kubelet[1543]: E0630 14:32:01.703226    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:32:04 addons-301682 kubelet[1543]: E0630 14:32:04.142133    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293924141814786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:32:04 addons-301682 kubelet[1543]: E0630 14:32:04.142172    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293924141814786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:32:12 addons-301682 kubelet[1543]: I0630 14:32:12.696086    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:32:12 addons-301682 kubelet[1543]: I0630 14:32:12.696216    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:32:12 addons-301682 kubelet[1543]: E0630 14:32:12.698051    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:32:14 addons-301682 kubelet[1543]: E0630 14:32:14.146432    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293934145798686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:32:14 addons-301682 kubelet[1543]: E0630 14:32:14.146477    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293934145798686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4] <==
	W0630 14:31:52.082020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:54.085317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:54.094822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:56.098269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:56.106667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:58.110958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:31:58.120586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:00.124156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:00.130346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:02.133065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:02.141838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:04.154241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:04.168874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:06.171931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:06.178167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:08.182272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:08.187706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:10.190417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:10.199395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:12.202566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:12.208237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:14.211125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:14.216445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:16.219350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:32:16.227156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
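	The storage-provisioner hits the v1 Endpoints API roughly every two seconds, apparently for its leader-election lock, and each call now draws the deprecation warning; the replacement is the discovery.k8s.io/v1 EndpointSlice API. The equivalent read against the new API, for reference:
	
	  kubectl --context addons-301682 get endpointslices.discovery.k8s.io -n kube-system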
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
helpers_test.go:261: (dbg) Run:  kubectl --context addons-301682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0: exit status 1 (104.009537ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:25:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9gdz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9gdz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m37s                  default-scheduler  Successfully assigned default/nginx to addons-301682
	  Warning  Failed     5m28s                  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m10s (x3 over 5m28s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m10s (x2 over 4m19s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    104s (x4 over 5m27s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     104s (x4 over 5m27s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    89s (x4 over 6m37s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:30:11 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcnmb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-jcnmb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m7s                default-scheduler  Successfully assigned default/task-pv-pod to addons-301682
	  Warning  Failed     60s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:29cf9892ca1103e0b8c97db86f819fac1d9457b176bc77dd4f18ed2da4dd159f in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     60s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    59s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     59s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    47s (x2 over 2m6s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l844 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6l844:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fnqjq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9xc5z" not found
	Error from server (NotFound): pods "registry-694bd45846-x8cnn" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-301682 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0: exit status 1
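Every Waiting container in the describes above shares one root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests) on docker.io/nginx, docker.io/registry:3.0.0 and, in the Yakd test below, docker.io/marcnuri/yakd:0.0.5. Two possible mitigations for a CI run like this: authenticate the pulls (the run already enables the registry-creds addon, which can carry Docker Hub credentials), or pre-load the images so the kubelet never has to pull, e.g.:

  docker pull docker.io/nginx:alpine
  out/minikube-linux-amd64 -p addons-301682 image load docker.io/nginx:alpine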
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.865510741s)
--- FAIL: TestAddons/parallel/LocalPath (345.84s)

                                                
                                    
TestAddons/parallel/Yakd (246.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-cwpg5" [e0aa6e49-e49a-4e94-91b2-b39ae4dfd5ee] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:329: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-06-30 14:27:47.729825257 +0000 UTC m=+610.817706060
addons_test.go:1047: (dbg) Run:  kubectl --context addons-301682 describe po yakd-dashboard-575dd5996b-cwpg5 -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-301682 describe po yakd-dashboard-575dd5996b-cwpg5 -n yakd-dashboard:
Name:             yakd-dashboard-575dd5996b-cwpg5
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-301682/192.168.39.227
Start Time:       Mon, 30 Jun 2025 14:19:15 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
                  app.kubernetes.io/name=yakd-dashboard
                  gcp-auth-skip-secret=true
                  pod-template-hash=575dd5996b
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  ReplicaSet/yakd-dashboard-575dd5996b
Containers:
  yakd:
    Container ID:   
    Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
    Image ID:       
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  256Mi
    Requests:
      memory:  128Mi
    Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
      HOSTNAME:              yakd-dashboard-575dd5996b-cwpg5 (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6m55c (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6m55c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  8m32s                 default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-575dd5996b-cwpg5 to addons-301682
  Warning  Failed     3m5s                  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     104s (x3 over 6m38s)  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     104s (x4 over 6m38s)  kubelet            Error: ErrImagePull
  Normal   BackOff    32s (x11 over 6m38s)  kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed     32s (x11 over 6m38s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    19s (x5 over 8m27s)   kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
addons_test.go:1047: (dbg) Run:  kubectl --context addons-301682 logs yakd-dashboard-575dd5996b-cwpg5 -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-301682 logs yakd-dashboard-575dd5996b-cwpg5 -n yakd-dashboard: exit status 1 (76.798273ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-575dd5996b-cwpg5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1047: kubectl --context addons-301682 logs yakd-dashboard-575dd5996b-cwpg5 -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
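The logs call above fails with BadRequest because the yakd container never started, so there is no log stream to read; the container's waiting reason is the useful signal. It can be extracted directly with a jsonpath query (standard kubectl, pod name from the log):

  kubectl --context addons-301682 -n yakd-dashboard get pod yakd-dashboard-575dd5996b-cwpg5 -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'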
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-301682 -n addons-301682
helpers_test.go:244: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 logs -n 25: (1.581520604s)
helpers_test.go:252: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:17 UTC |                     |
	|         | -p download-only-777401              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | -o=json --download-only              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | -p download-only-781147              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401              | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-781147              | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | --download-only -p                   | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | binary-mirror-095233                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44619               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-095233              | binary-mirror-095233 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| addons  | disable dashboard -p                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | addons-301682                        |                      |         |         |                     |                     |
	| start   | -p addons-301682 --wait=true         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:25 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | -p addons-301682                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:25 UTC | 30 Jun 25 14:25 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-301682 addons disable         | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-301682 addons                 | addons-301682        | jenkins | v1.36.0 | 30 Jun 25 14:27 UTC | 30 Jun 25 14:27 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
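	For reference, the multi-row start entry above corresponds to this single invocation, reassembled verbatim from the Args column:
	
	  out/minikube-linux-amd64 start -p addons-301682 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher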
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:18:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:18:18.914659 1558425 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:18:18.914940 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.914950 1558425 out.go:358] Setting ErrFile to fd 2...
	I0630 14:18:18.914954 1558425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:18.915163 1558425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:18:18.915795 1558425 out.go:352] Setting JSON to false
	I0630 14:18:18.916730 1558425 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":28791,"bootTime":1751264308,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:18:18.916865 1558425 start.go:140] virtualization: kvm guest
	I0630 14:18:18.918804 1558425 out.go:177] * [addons-301682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:18:18.920591 1558425 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:18:18.920596 1558425 notify.go:220] Checking for updates...
	I0630 14:18:18.923430 1558425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:18:18.924993 1558425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:18:18.926449 1558425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:18.927916 1558425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:18:18.929158 1558425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:18:18.930609 1558425 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:18:18.965828 1558425 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:18:18.967229 1558425 start.go:304] selected driver: kvm2
	I0630 14:18:18.967249 1558425 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:18:18.967260 1558425 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:18:18.968055 1558425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.968161 1558425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:18:18.984884 1558425 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:18:18.984967 1558425 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:18:18.985269 1558425 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:18:18.985311 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:18.985360 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:18.985373 1558425 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:18:18.985492 1558425 start.go:347] cluster config:
	{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:18.985616 1558425 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:18.987784 1558425 out.go:177] * Starting "addons-301682" primary control-plane node in "addons-301682" cluster
	I0630 14:18:18.989175 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:18.989236 1558425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 14:18:18.989252 1558425 cache.go:56] Caching tarball of preloaded images
	I0630 14:18:18.989351 1558425 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 14:18:18.989366 1558425 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
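For reference, the preload verified above is an lz4-compressed tarball of pre-pulled images for v1.33.2 on cri-o. A minimal sketch for peeking at its contents on the Jenkins host (assumes the lz4 CLI is installed; the path is the one logged above):

    # stream-decompress the preload and list the first few tar entries
    lz4 -dc /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 \
      | tar -tf - | head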
	I0630 14:18:18.989808 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:18.989840 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json: {Name:mk0b97369f17da476cd2a8393ae45d3ce84c94a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
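The profile saved here is plain JSON mirroring the cluster config struct printed earlier, so individual fields can be spot-checked with jq (a sketch; assumes jq is available on the host):

    # confirm the Kubernetes version recorded in the saved profile
    jq -r '.KubernetesConfig.KubernetesVersion' \
      /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json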
	I0630 14:18:18.990016 1558425 start.go:360] acquireMachinesLock for addons-301682: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:18:18.990075 1558425 start.go:364] duration metric: took 40.808µs to acquireMachinesLock for "addons-301682"
	I0630 14:18:18.990091 1558425 start.go:93] Provisioning new machine with config: &{Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:18:18.990156 1558425 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:18:18.992039 1558425 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:18:18.992210 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:18:18.992268 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:18:19.009360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0630 14:18:19.009944 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:18:19.010513 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:18:19.010538 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:18:19.010965 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:18:19.011233 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:19.011437 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:19.011652 1558425 start.go:159] libmachine.API.Create for "addons-301682" (driver="kvm2")
	I0630 14:18:19.011686 1558425 client.go:168] LocalClient.Create starting
	I0630 14:18:19.011737 1558425 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 14:18:19.156936 1558425 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 14:18:19.413430 1558425 main.go:141] libmachine: Running pre-create checks...
	I0630 14:18:19.413459 1558425 main.go:141] libmachine: (addons-301682) Calling .PreCreateCheck
	I0630 14:18:19.414009 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:19.414492 1558425 main.go:141] libmachine: Creating machine...
	I0630 14:18:19.414509 1558425 main.go:141] libmachine: (addons-301682) Calling .Create
	I0630 14:18:19.414658 1558425 main.go:141] libmachine: (addons-301682) creating KVM machine...
	I0630 14:18:19.414680 1558425 main.go:141] libmachine: (addons-301682) creating network...
	I0630 14:18:19.416107 1558425 main.go:141] libmachine: (addons-301682) DBG | found existing default KVM network
	I0630 14:18:19.416967 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.416813 1558447 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236b0}
	I0630 14:18:19.417027 1558425 main.go:141] libmachine: (addons-301682) DBG | created network xml: 
	I0630 14:18:19.417047 1558425 main.go:141] libmachine: (addons-301682) DBG | <network>
	I0630 14:18:19.417058 1558425 main.go:141] libmachine: (addons-301682) DBG |   <name>mk-addons-301682</name>
	I0630 14:18:19.417065 1558425 main.go:141] libmachine: (addons-301682) DBG |   <dns enable='no'/>
	I0630 14:18:19.417074 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417083 1558425 main.go:141] libmachine: (addons-301682) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:18:19.417095 1558425 main.go:141] libmachine: (addons-301682) DBG |     <dhcp>
	I0630 14:18:19.417105 1558425 main.go:141] libmachine: (addons-301682) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:18:19.417114 1558425 main.go:141] libmachine: (addons-301682) DBG |     </dhcp>
	I0630 14:18:19.417134 1558425 main.go:141] libmachine: (addons-301682) DBG |   </ip>
	I0630 14:18:19.417161 1558425 main.go:141] libmachine: (addons-301682) DBG |   
	I0630 14:18:19.417196 1558425 main.go:141] libmachine: (addons-301682) DBG | </network>
	I0630 14:18:19.417211 1558425 main.go:141] libmachine: (addons-301682) DBG | 
	I0630 14:18:19.422966 1558425 main.go:141] libmachine: (addons-301682) DBG | trying to create private KVM network mk-addons-301682 192.168.39.0/24...
	I0630 14:18:19.504039 1558425 main.go:141] libmachine: (addons-301682) DBG | private KVM network mk-addons-301682 192.168.39.0/24 created
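The network XML logged above is ordinary libvirt network XML, so the same network could be created by hand. A sketch, assuming the XML is saved to mk-addons-301682.xml:

    virsh net-define mk-addons-301682.xml   # register the network definition
    virsh net-start mk-addons-301682        # create the bridge and serve the DHCP range above
    virsh net-list --all                    # verify the network is active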
	I0630 14:18:19.504091 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.503994 1558447 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.504105 1558425 main.go:141] libmachine: (addons-301682) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.504121 1558425 main.go:141] libmachine: (addons-301682) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:18:19.504170 1558425 main.go:141] libmachine: (addons-301682) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:18:19.852642 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.852518 1558447 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa...
	I0630 14:18:19.994685 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994513 1558447 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk...
	I0630 14:18:19.994718 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing magic tar header
	I0630 14:18:19.994732 1558425 main.go:141] libmachine: (addons-301682) DBG | Writing SSH key tar header
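The raw disk follows the boot2docker convention: a sparse raw file whose first bytes are a small tar stream (the magic header and SSH key written above) that the guest extracts on first boot. The allocation itself is roughly equivalent to the following (an illustration, not minikube's exact code):

    # create a 20000MB sparse raw image, matching DiskSize:20000 in the config above
    qemu-img create -f raw addons-301682.rawdisk 20000M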
	I0630 14:18:19.994739 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:19.994653 1558447 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 ...
	I0630 14:18:19.994842 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682
	I0630 14:18:19.994876 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 14:18:19.994890 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682 (perms=drwx------)
	I0630 14:18:19.994904 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:18:19.994914 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 14:18:19.994928 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 14:18:19.994937 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:18:19.994950 1558425 main.go:141] libmachine: (addons-301682) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:18:19.994964 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:19.994974 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:19.994989 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 14:18:19.994999 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:18:19.995008 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home/jenkins
	I0630 14:18:19.995017 1558425 main.go:141] libmachine: (addons-301682) DBG | checking permissions on dir: /home
	I0630 14:18:19.995028 1558425 main.go:141] libmachine: (addons-301682) DBG | skipping /home - not owner
	I0630 14:18:19.996388 1558425 main.go:141] libmachine: (addons-301682) define libvirt domain using xml: 
	I0630 14:18:19.996417 1558425 main.go:141] libmachine: (addons-301682) <domain type='kvm'>
	I0630 14:18:19.996424 1558425 main.go:141] libmachine: (addons-301682)   <name>addons-301682</name>
	I0630 14:18:19.996429 1558425 main.go:141] libmachine: (addons-301682)   <memory unit='MiB'>4096</memory>
	I0630 14:18:19.996434 1558425 main.go:141] libmachine: (addons-301682)   <vcpu>2</vcpu>
	I0630 14:18:19.996437 1558425 main.go:141] libmachine: (addons-301682)   <features>
	I0630 14:18:19.996441 1558425 main.go:141] libmachine: (addons-301682)     <acpi/>
	I0630 14:18:19.996445 1558425 main.go:141] libmachine: (addons-301682)     <apic/>
	I0630 14:18:19.996450 1558425 main.go:141] libmachine: (addons-301682)     <pae/>
	I0630 14:18:19.996454 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996496 1558425 main.go:141] libmachine: (addons-301682)   </features>
	I0630 14:18:19.996523 1558425 main.go:141] libmachine: (addons-301682)   <cpu mode='host-passthrough'>
	I0630 14:18:19.996559 1558425 main.go:141] libmachine: (addons-301682)   
	I0630 14:18:19.996579 1558425 main.go:141] libmachine: (addons-301682)   </cpu>
	I0630 14:18:19.996596 1558425 main.go:141] libmachine: (addons-301682)   <os>
	I0630 14:18:19.996607 1558425 main.go:141] libmachine: (addons-301682)     <type>hvm</type>
	I0630 14:18:19.996615 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='cdrom'/>
	I0630 14:18:19.996623 1558425 main.go:141] libmachine: (addons-301682)     <boot dev='hd'/>
	I0630 14:18:19.996628 1558425 main.go:141] libmachine: (addons-301682)     <bootmenu enable='no'/>
	I0630 14:18:19.996634 1558425 main.go:141] libmachine: (addons-301682)   </os>
	I0630 14:18:19.996639 1558425 main.go:141] libmachine: (addons-301682)   <devices>
	I0630 14:18:19.996646 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='cdrom'>
	I0630 14:18:19.996654 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/boot2docker.iso'/>
	I0630 14:18:19.996661 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hdc' bus='scsi'/>
	I0630 14:18:19.996666 1558425 main.go:141] libmachine: (addons-301682)       <readonly/>
	I0630 14:18:19.996672 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996677 1558425 main.go:141] libmachine: (addons-301682)     <disk type='file' device='disk'>
	I0630 14:18:19.996687 1558425 main.go:141] libmachine: (addons-301682)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:18:19.996710 1558425 main.go:141] libmachine: (addons-301682)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/addons-301682.rawdisk'/>
	I0630 14:18:19.996729 1558425 main.go:141] libmachine: (addons-301682)       <target dev='hda' bus='virtio'/>
	I0630 14:18:19.996742 1558425 main.go:141] libmachine: (addons-301682)     </disk>
	I0630 14:18:19.996753 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996766 1558425 main.go:141] libmachine: (addons-301682)       <source network='mk-addons-301682'/>
	I0630 14:18:19.996777 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996786 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996796 1558425 main.go:141] libmachine: (addons-301682)     <interface type='network'>
	I0630 14:18:19.996808 1558425 main.go:141] libmachine: (addons-301682)       <source network='default'/>
	I0630 14:18:19.996821 1558425 main.go:141] libmachine: (addons-301682)       <model type='virtio'/>
	I0630 14:18:19.996847 1558425 main.go:141] libmachine: (addons-301682)     </interface>
	I0630 14:18:19.996868 1558425 main.go:141] libmachine: (addons-301682)     <serial type='pty'>
	I0630 14:18:19.996884 1558425 main.go:141] libmachine: (addons-301682)       <target port='0'/>
	I0630 14:18:19.996899 1558425 main.go:141] libmachine: (addons-301682)     </serial>
	I0630 14:18:19.996909 1558425 main.go:141] libmachine: (addons-301682)     <console type='pty'>
	I0630 14:18:19.996918 1558425 main.go:141] libmachine: (addons-301682)       <target type='serial' port='0'/>
	I0630 14:18:19.996928 1558425 main.go:141] libmachine: (addons-301682)     </console>
	I0630 14:18:19.996938 1558425 main.go:141] libmachine: (addons-301682)     <rng model='virtio'>
	I0630 14:18:19.996951 1558425 main.go:141] libmachine: (addons-301682)       <backend model='random'>/dev/random</backend>
	I0630 14:18:19.996962 1558425 main.go:141] libmachine: (addons-301682)     </rng>
	I0630 14:18:19.996969 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996980 1558425 main.go:141] libmachine: (addons-301682)     
	I0630 14:18:19.996990 1558425 main.go:141] libmachine: (addons-301682)   </devices>
	I0630 14:18:19.997056 1558425 main.go:141] libmachine: (addons-301682) </domain>
	I0630 14:18:19.997083 1558425 main.go:141] libmachine: (addons-301682) 
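The domain XML above can likewise be fed straight to libvirt. A sketch, assuming it is saved to addons-301682.xml:

    virsh define addons-301682.xml   # register the domain (the 'define libvirt domain' step above)
    virsh start addons-301682        # boot it (the 'creating domain...' step that follows)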
	I0630 14:18:20.002436 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:4a:da:84 in network default
	I0630 14:18:20.002966 1558425 main.go:141] libmachine: (addons-301682) starting domain...
	I0630 14:18:20.002981 1558425 main.go:141] libmachine: (addons-301682) ensuring networks are active...
	I0630 14:18:20.002988 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:20.003928 1558425 main.go:141] libmachine: (addons-301682) Ensuring network default is active
	I0630 14:18:20.004377 1558425 main.go:141] libmachine: (addons-301682) Ensuring network mk-addons-301682 is active
	I0630 14:18:20.004924 1558425 main.go:141] libmachine: (addons-301682) getting domain XML...
	I0630 14:18:20.006331 1558425 main.go:141] libmachine: (addons-301682) creating domain...
	I0630 14:18:21.490289 1558425 main.go:141] libmachine: (addons-301682) waiting for IP...
	I0630 14:18:21.491154 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.491628 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.491677 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.491627 1558447 retry.go:31] will retry after 227.981696ms: waiting for domain to come up
	I0630 14:18:21.721263 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:21.721780 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:21.721803 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:21.721737 1558447 retry.go:31] will retry after 379.046975ms: waiting for domain to come up
	I0630 14:18:22.102468 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.102921 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.102946 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.102870 1558447 retry.go:31] will retry after 342.349164ms: waiting for domain to come up
	I0630 14:18:22.446573 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.446984 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.447028 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.446972 1558447 retry.go:31] will retry after 471.24813ms: waiting for domain to come up
	I0630 14:18:22.920211 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:22.920789 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:22.920882 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:22.920792 1558447 retry.go:31] will retry after 708.674729ms: waiting for domain to come up
	I0630 14:18:23.631552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:23.632140 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:23.632158 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:23.632083 1558447 retry.go:31] will retry after 832.667186ms: waiting for domain to come up
	I0630 14:18:24.466597 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:24.467128 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:24.467188 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:24.467084 1558447 retry.go:31] will retry after 1.046318752s: waiting for domain to come up
	I0630 14:18:25.514952 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:25.515439 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:25.515467 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:25.515417 1558447 retry.go:31] will retry after 1.194063503s: waiting for domain to come up
	I0630 14:18:26.712109 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:26.712668 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:26.712736 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:26.712627 1558447 retry.go:31] will retry after 1.248422127s: waiting for domain to come up
	I0630 14:18:27.962423 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:27.962871 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:27.962904 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:27.962823 1558447 retry.go:31] will retry after 2.035519816s: waiting for domain to come up
	I0630 14:18:29.999626 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:30.000023 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:30.000122 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:30.000029 1558447 retry.go:31] will retry after 2.163487066s: waiting for domain to come up
	I0630 14:18:32.164834 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:32.165260 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:32.165289 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:32.165193 1558447 retry.go:31] will retry after 2.715279658s: waiting for domain to come up
	I0630 14:18:34.882095 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:34.882613 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:34.882651 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:34.882566 1558447 retry.go:31] will retry after 4.101409574s: waiting for domain to come up
	I0630 14:18:38.986670 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:38.987057 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find current IP address of domain addons-301682 in network mk-addons-301682
	I0630 14:18:38.987115 1558425 main.go:141] libmachine: (addons-301682) DBG | I0630 14:18:38.987021 1558447 retry.go:31] will retry after 4.770477957s: waiting for domain to come up
	I0630 14:18:43.763775 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764289 1558425 main.go:141] libmachine: (addons-301682) found domain IP: 192.168.39.227
	I0630 14:18:43.764317 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has current primary IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.764323 1558425 main.go:141] libmachine: (addons-301682) reserving static IP address...
	I0630 14:18:43.764708 1558425 main.go:141] libmachine: (addons-301682) DBG | unable to find host DHCP lease matching {name: "addons-301682", mac: "52:54:00:83:16:36", ip: "192.168.39.227"} in network mk-addons-301682
	I0630 14:18:43.852639 1558425 main.go:141] libmachine: (addons-301682) reserved static IP address 192.168.39.227 for domain addons-301682
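The retry loop above is simply polling libvirt until the guest obtains a DHCP lease; the same data can be inspected directly (a sketch):

    virsh net-dhcp-leases mk-addons-301682          # leases handed out on the cluster network
    virsh domifaddr addons-301682 --source lease    # the domain's address from the lease table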
	I0630 14:18:43.852672 1558425 main.go:141] libmachine: (addons-301682) DBG | Getting to WaitForSSH function...
	I0630 14:18:43.852679 1558425 main.go:141] libmachine: (addons-301682) waiting for SSH...
	I0630 14:18:43.855466 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855863 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.855913 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.855970 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH client type: external
	I0630 14:18:43.856034 1558425 main.go:141] libmachine: (addons-301682) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa (-rw-------)
	I0630 14:18:43.856089 1558425 main.go:141] libmachine: (addons-301682) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:18:43.856119 1558425 main.go:141] libmachine: (addons-301682) DBG | About to run SSH command:
	I0630 14:18:43.856137 1558425 main.go:141] libmachine: (addons-301682) DBG | exit 0
	I0630 14:18:43.981627 1558425 main.go:141] libmachine: (addons-301682) DBG | SSH cmd err, output: <nil>: 
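The WaitForSSH probe just runs `exit 0` through the external ssh client until the daemon answers. A simplified sketch of the same check, using the key and address from the log (most of the hardening options shown above are omitted):

    ssh -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        docker@192.168.39.227 'exit 0' && echo sshd is up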
	I0630 14:18:43.981928 1558425 main.go:141] libmachine: (addons-301682) KVM machine creation complete
	I0630 14:18:43.982338 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:43.982966 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983226 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:43.983462 1558425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:18:43.983477 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:18:43.984862 1558425 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:18:43.984878 1558425 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:18:43.984885 1558425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:18:43.984892 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:43.987532 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.987932 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:43.987959 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:43.988068 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:43.988288 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988434 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:43.988572 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:43.988711 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:43.988940 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:43.988950 1558425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:18:44.093060 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.093094 1558425 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:18:44.093103 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.096339 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096697 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.096721 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.096934 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.097182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097449 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.097610 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.097843 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.098060 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.098080 1558425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:18:44.202824 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:18:44.202946 1558425 main.go:141] libmachine: found compatible host: buildroot
	I0630 14:18:44.202959 1558425 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:18:44.202967 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203257 1558425 buildroot.go:166] provisioning hostname "addons-301682"
	I0630 14:18:44.203283 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.203500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.206655 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.206965 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.206989 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.207261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.207476 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207654 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.207765 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.207928 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.208172 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.208189 1558425 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-301682 && echo "addons-301682" | sudo tee /etc/hostname
	I0630 14:18:44.326076 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-301682
	
	I0630 14:18:44.326120 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.329781 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330236 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.330271 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.330493 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.330780 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331000 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.331147 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.331319 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.331561 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.331583 1558425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-301682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-301682/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-301682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:18:44.442815 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:18:44.442853 1558425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 14:18:44.442872 1558425 buildroot.go:174] setting up certificates
	I0630 14:18:44.442886 1558425 provision.go:84] configureAuth start
	I0630 14:18:44.442963 1558425 main.go:141] libmachine: (addons-301682) Calling .GetMachineName
	I0630 14:18:44.443427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:44.446591 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447120 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.447146 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.447411 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.449967 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450292 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.450314 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.450474 1558425 provision.go:143] copyHostCerts
	I0630 14:18:44.450577 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 14:18:44.450730 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 14:18:44.450832 1558425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 14:18:44.450922 1558425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.addons-301682 san=[127.0.0.1 192.168.39.227 addons-301682 localhost minikube]
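configureAuth issues a server certificate for the SANs listed at the end of the line above (127.0.0.1, 192.168.39.227, addons-301682, localhost, minikube). A sketch for double-checking them with openssl:

    # print the Subject Alternative Name extension of the generated server cert
    openssl x509 -in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'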
	I0630 14:18:44.669777 1558425 provision.go:177] copyRemoteCerts
	I0630 14:18:44.669866 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:18:44.669906 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.673124 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673495 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.673530 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.673760 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.674080 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.674291 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.674517 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:44.758379 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:18:44.788885 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:18:44.817666 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:18:44.847039 1558425 provision.go:87] duration metric: took 404.122435ms to configureAuth
	I0630 14:18:44.847076 1558425 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:18:44.847582 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:18:44.847720 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:44.850359 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.850971 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:44.850998 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:44.851240 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:44.851500 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851706 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:44.851871 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:44.852084 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:44.852306 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:44.852322 1558425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 14:18:45.094141 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
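The drop-in only takes effect because the guest's crio systemd unit sources /etc/sysconfig/crio.minikube (an assumption about the minikube ISO, consistent with the `systemctl restart crio` in the command above). A sketch for confirming from inside the VM:

    systemctl cat crio | grep -i EnvironmentFile   # should reference /etc/sysconfig/crio.minikube
    systemctl show crio -p ActiveState             # expect ActiveState=active after the restart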
	I0630 14:18:45.094172 1558425 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:18:45.094182 1558425 main.go:141] libmachine: (addons-301682) Calling .GetURL
	I0630 14:18:45.095525 1558425 main.go:141] libmachine: (addons-301682) DBG | using libvirt version 6000000
	I0630 14:18:45.097995 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098457 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.098484 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.098973 1558425 main.go:141] libmachine: Docker is up and running!
	I0630 14:18:45.098988 1558425 main.go:141] libmachine: Reticulating splines...
	I0630 14:18:45.098996 1558425 client.go:171] duration metric: took 26.087298039s to LocalClient.Create
	I0630 14:18:45.099029 1558425 start.go:167] duration metric: took 26.087375233s to libmachine.API.Create "addons-301682"
	I0630 14:18:45.099043 1558425 start.go:293] postStartSetup for "addons-301682" (driver="kvm2")
	I0630 14:18:45.099058 1558425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:18:45.099080 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.099385 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:18:45.099417 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.103070 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103476 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.103519 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.103738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.103974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.104154 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.104348 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.190062 1558425 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:18:45.194479 1558425 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:18:45.194513 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 14:18:45.194584 1558425 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 14:18:45.194617 1558425 start.go:296] duration metric: took 95.564885ms for postStartSetup
	I0630 14:18:45.194655 1558425 main.go:141] libmachine: (addons-301682) Calling .GetConfigRaw
	I0630 14:18:45.195269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.198414 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.198916 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.198937 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.199225 1558425 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/config.json ...
	I0630 14:18:45.199414 1558425 start.go:128] duration metric: took 26.209245344s to createHost
	I0630 14:18:45.199439 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.202677 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203657 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.203683 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.203917 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.204167 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204389 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.204594 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.204750 1558425 main.go:141] libmachine: Using SSH client type: native
	I0630 14:18:45.204952 1558425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0630 14:18:45.204962 1558425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:18:45.310482 1558425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751293125.283428942
	
	I0630 14:18:45.310513 1558425 fix.go:216] guest clock: 1751293125.283428942
	I0630 14:18:45.310540 1558425 fix.go:229] Guest: 2025-06-30 14:18:45.283428942 +0000 UTC Remote: 2025-06-30 14:18:45.199427216 +0000 UTC m=+26.326566099 (delta=84.001726ms)
	I0630 14:18:45.310570 1558425 fix.go:200] guest clock delta is within tolerance: 84.001726ms
	I0630 14:18:45.310578 1558425 start.go:83] releasing machines lock for "addons-301682", held for 26.320495243s
	I0630 14:18:45.310656 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.310928 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:45.313785 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314207 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.314241 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.314506 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315123 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315340 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:18:45.315461 1558425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:18:45.315505 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.315646 1558425 ssh_runner.go:195] Run: cat /version.json
	I0630 14:18:45.315683 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:18:45.318925 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319155 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319563 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319594 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:45.319617 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319643 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:45.319788 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.319877 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:18:45.320031 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320110 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:18:45.320304 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320317 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:18:45.320442 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.320501 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:18:45.399981 1558425 ssh_runner.go:195] Run: systemctl --version
	I0630 14:18:45.435607 1558425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 14:18:45.595593 1558425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:18:45.602291 1558425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:18:45.602374 1558425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:18:45.622229 1558425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:18:45.622263 1558425 start.go:495] detecting cgroup driver to use...
	I0630 14:18:45.622333 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 14:18:45.641226 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 14:18:45.658995 1558425 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:18:45.659074 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:18:45.675308 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:18:45.691780 1558425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:18:45.844773 1558425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:18:46.002067 1558425 docker.go:246] disabling docker service ...
	I0630 14:18:46.002163 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:18:46.018486 1558425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:18:46.032711 1558425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:18:46.215507 1558425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:18:46.345437 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:18:46.361241 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:18:46.382182 1558425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 14:18:46.382265 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.393781 1558425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 14:18:46.393858 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.404879 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.415753 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.427101 1558425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:18:46.439585 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.450640 1558425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.469657 1558425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 14:18:46.480995 1558425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:18:46.490960 1558425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:18:46.491038 1558425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:18:46.506162 1558425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 14:18:46.516885 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:46.649290 1558425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 14:18:46.754804 1558425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 14:18:46.754924 1558425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 14:18:46.760277 1558425 start.go:563] Will wait 60s for crictl version
	I0630 14:18:46.760374 1558425 ssh_runner.go:195] Run: which crictl
	I0630 14:18:46.764622 1558425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:18:46.806540 1558425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 14:18:46.806668 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.835571 1558425 ssh_runner.go:195] Run: crio --version
	I0630 14:18:46.870294 1558425 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 14:18:46.871793 1558425 main.go:141] libmachine: (addons-301682) Calling .GetIP
	I0630 14:18:46.874897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875281 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:18:46.875316 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:18:46.875568 1558425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:18:46.880040 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:46.893844 1558425 kubeadm.go:875] updating cluster {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:18:46.894040 1558425 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:46.894098 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:46.928051 1558425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:18:46.928142 1558425 ssh_runner.go:195] Run: which lz4
	I0630 14:18:46.932106 1558425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:18:46.936459 1558425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:18:46.936498 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 14:18:48.250677 1558425 crio.go:462] duration metric: took 1.318609473s to copy over tarball
	I0630 14:18:48.250794 1558425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:18:50.229636 1558425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978807649s)
	I0630 14:18:50.229688 1558425 crio.go:469] duration metric: took 1.978978941s to extract the tarball
	I0630 14:18:50.229696 1558425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 14:18:50.268804 1558425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:18:50.313787 1558425 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 14:18:50.313824 1558425 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:18:50.313836 1558425 kubeadm.go:926] updating node { 192.168.39.227 8443 v1.33.2 crio true true} ...
	I0630 14:18:50.313984 1558425 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-301682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 14:18:50.314108 1558425 ssh_runner.go:195] Run: crio config
	I0630 14:18:50.358762 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:18:50.358788 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:50.358799 1558425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:18:50.358821 1558425 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-301682 NodeName:addons-301682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:18:50.358985 1558425 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-301682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 14:18:50.359075 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:18:50.370269 1558425 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:18:50.370359 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:18:50.381422 1558425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0630 14:18:50.402864 1558425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:18:50.423535 1558425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0630 14:18:50.443802 1558425 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0630 14:18:50.448073 1558425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:18:50.462771 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:18:50.610565 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:18:50.641674 1558425 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682 for IP: 192.168.39.227
	I0630 14:18:50.641703 1558425 certs.go:194] generating shared ca certs ...
	I0630 14:18:50.641726 1558425 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.641917 1558425 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 14:18:50.775973 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt ...
	I0630 14:18:50.776127 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt: {Name:mk4a7e2f23df1877aa667a5fe9d149d87fa65b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776340 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key ...
	I0630 14:18:50.776353 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key: {Name:mkfe815a12ae8eded146419f42722ed747bb8cb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:50.776428 1558425 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 14:18:51.239699 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt ...
	I0630 14:18:51.239736 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt: {Name:mk010f91985630538e2436d654ff5b4cc759ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.239913 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key ...
	I0630 14:18:51.239969 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key: {Name:mk7a36f8a28748533897dd07634d8a5fe44a63a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.240059 1558425 certs.go:256] generating profile certs ...
	I0630 14:18:51.240131 1558425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key
	I0630 14:18:51.240150 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt with IP's: []
	I0630 14:18:51.635887 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt ...
	I0630 14:18:51.635927 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: {Name:mk22a67b2c0e90bc5dc67c34e330ee73fa799ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636119 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key ...
	I0630 14:18:51.636131 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.key: {Name:mkbf3398b6d7cd5371d9a47d76e04eca4caef4d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:51.636203 1558425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213
	I0630 14:18:51.636222 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0630 14:18:52.292769 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 ...
	I0630 14:18:52.292809 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213: {Name:mk1402d3ac26fc5001a4011347c3552a378bda20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.292987 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 ...
	I0630 14:18:52.293001 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213: {Name:mkeaa6e21db5ae6cfb6b65c2ca90535340da5144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.293104 1558425 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt
	I0630 14:18:52.293196 1558425 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key.294cb213 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key
	I0630 14:18:52.293250 1558425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key
	I0630 14:18:52.293270 1558425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt with IP's: []
	I0630 14:18:52.419123 1558425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt ...
	I0630 14:18:52.419160 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt: {Name:mk3dd33047a5c3911a43a99bfac807aefa8e06f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419432 1558425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key ...
	I0630 14:18:52.419460 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key: {Name:mk0d0b95d0dc825fc1e604461553530ed22a222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:52.419680 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:18:52.419719 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:18:52.419744 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:18:52.419768 1558425 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 14:18:52.420585 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:18:52.463313 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:18:52.499004 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:18:52.526030 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 14:18:52.553220 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:18:52.581783 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:18:52.609656 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:18:52.639333 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 14:18:52.668789 1558425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:18:52.696673 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:18:52.718151 1558425 ssh_runner.go:195] Run: openssl version
	I0630 14:18:52.724602 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:18:52.737181 1558425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742169 1558425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.742231 1558425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:18:52.749342 1558425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 14:18:52.762744 1558425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:18:52.768406 1558425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:18:52.768474 1558425 kubeadm.go:392] StartCluster: {Name:addons-301682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-301682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:52.768572 1558425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 14:18:52.768641 1558425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:18:52.812315 1558425 cri.go:89] found id: ""
	I0630 14:18:52.812437 1558425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:18:52.824357 1558425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:18:52.837485 1558425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:18:52.850688 1558425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:18:52.850718 1558425 kubeadm.go:157] found existing configuration files:
	
	I0630 14:18:52.850770 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:18:52.862272 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:18:52.862353 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:18:52.874603 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:18:52.885384 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:18:52.885470 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:18:52.897341 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.908726 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:18:52.908791 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:18:52.920093 1558425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:18:52.930423 1558425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:18:52.930535 1558425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:18:52.943582 1558425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:18:53.101493 1558425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:19:04.329808 1558425 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:19:04.329898 1558425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:19:04.330028 1558425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:19:04.330246 1558425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:19:04.330383 1558425 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:19:04.330478 1558425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:19:04.332630 1558425 out.go:235]   - Generating certificates and keys ...
	I0630 14:19:04.332731 1558425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:19:04.332810 1558425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:19:04.332905 1558425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:19:04.332972 1558425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:19:04.333024 1558425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:19:04.333069 1558425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:19:04.333119 1558425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:19:04.333250 1558425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333332 1558425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:19:04.333509 1558425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-301682 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0630 14:19:04.333623 1558425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:19:04.333739 1558425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:19:04.333816 1558425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:19:04.333868 1558425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:19:04.333909 1558425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:19:04.333955 1558425 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:19:04.334001 1558425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:19:04.334088 1558425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:19:04.334155 1558425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:19:04.334337 1558425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:19:04.334433 1558425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:19:04.336040 1558425 out.go:235]   - Booting up control plane ...
	I0630 14:19:04.336158 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:19:04.336225 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:19:04.336291 1558425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:19:04.336387 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:19:04.336461 1558425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:19:04.336498 1558425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:19:04.336705 1558425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:19:04.336826 1558425 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:19:04.336898 1558425 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501258s
	I0630 14:19:04.336999 1558425 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:19:04.337079 1558425 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.227:8443/livez
	I0630 14:19:04.337160 1558425 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:19:04.337266 1558425 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:19:04.337343 1558425 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.200262885s
	I0630 14:19:04.337437 1558425 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.075387862s
	I0630 14:19:04.337541 1558425 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001441935s
	I0630 14:19:04.337665 1558425 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:19:04.337791 1558425 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:19:04.337843 1558425 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:19:04.338003 1558425 kubeadm.go:310] [mark-control-plane] Marking the node addons-301682 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:19:04.338066 1558425 kubeadm.go:310] [bootstrap-token] Using token: anrlv2.kitz2ouxhot5qn5d
	I0630 14:19:04.339966 1558425 out.go:235]   - Configuring RBAC rules ...
	I0630 14:19:04.340101 1558425 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:19:04.340226 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:19:04.340408 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:19:04.340552 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:19:04.340686 1558425 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:19:04.340806 1558425 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:19:04.340905 1558425 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:19:04.340944 1558425 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:19:04.340984 1558425 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:19:04.340990 1558425 kubeadm.go:310] 
	I0630 14:19:04.341040 1558425 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:19:04.341045 1558425 kubeadm.go:310] 
	I0630 14:19:04.341135 1558425 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:19:04.341142 1558425 kubeadm.go:310] 
	I0630 14:19:04.341172 1558425 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:19:04.341223 1558425 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:19:04.341270 1558425 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:19:04.341276 1558425 kubeadm.go:310] 
	I0630 14:19:04.341322 1558425 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:19:04.341328 1558425 kubeadm.go:310] 
	I0630 14:19:04.341449 1558425 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:19:04.341467 1558425 kubeadm.go:310] 
	I0630 14:19:04.341541 1558425 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:19:04.341643 1558425 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:19:04.341707 1558425 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:19:04.341712 1558425 kubeadm.go:310] 
	I0630 14:19:04.341781 1558425 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:19:04.341846 1558425 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:19:04.341851 1558425 kubeadm.go:310] 
	I0630 14:19:04.341924 1558425 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342019 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 14:19:04.342038 1558425 kubeadm.go:310] 	--control-plane 
	I0630 14:19:04.342043 1558425 kubeadm.go:310] 
	I0630 14:19:04.342140 1558425 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:19:04.342157 1558425 kubeadm.go:310] 
	I0630 14:19:04.342225 1558425 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token anrlv2.kitz2ouxhot5qn5d \
	I0630 14:19:04.342331 1558425 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
	I0630 14:19:04.342344 1558425 cni.go:84] Creating CNI manager for ""
	I0630 14:19:04.342353 1558425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:19:04.344305 1558425 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:19:04.345962 1558425 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:19:04.358944 1558425 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 14:19:04.382550 1558425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:19:04.382682 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:04.382684 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-301682 minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-301682 minikube.k8s.io/primary=true
	I0630 14:19:04.443025 1558425 ops.go:34] apiserver oom_adj: -16
	I0630 14:19:04.557859 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.058710 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:05.558655 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.058095 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:06.558920 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.058903 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:07.558782 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.058045 1558425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:19:08.134095 1558425 kubeadm.go:1105] duration metric: took 3.751500145s to wait for elevateKubeSystemPrivileges
	I0630 14:19:08.134146 1558425 kubeadm.go:394] duration metric: took 15.365674649s to StartCluster
	I0630 14:19:08.134169 1558425 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.134310 1558425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:19:08.134819 1558425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:19:08.135078 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:19:08.135086 1558425 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 14:19:08.135172 1558425 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0630 14:19:08.135355 1558425 addons.go:69] Setting yakd=true in profile "addons-301682"
	I0630 14:19:08.135370 1558425 addons.go:69] Setting default-storageclass=true in profile "addons-301682"
	I0630 14:19:08.135401 1558425 addons.go:69] Setting ingress=true in profile "addons-301682"
	I0630 14:19:08.135408 1558425 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-301682"
	I0630 14:19:08.135419 1558425 addons.go:69] Setting ingress-dns=true in profile "addons-301682"
	I0630 14:19:08.135425 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-301682"
	I0630 14:19:08.135433 1558425 addons.go:238] Setting addon ingress-dns=true in "addons-301682"
	I0630 14:19:08.135450 1558425 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135439 1558425 addons.go:69] Setting cloud-spanner=true in profile "addons-301682"
	I0630 14:19:08.135466 1558425 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-301682"
	I0630 14:19:08.135453 1558425 addons.go:69] Setting registry-creds=true in profile "addons-301682"
	I0630 14:19:08.135470 1558425 addons.go:238] Setting addon cloud-spanner=true in "addons-301682"
	I0630 14:19:08.135482 1558425 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-301682"
	I0630 14:19:08.135488 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135499 1558425 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-301682"
	I0630 14:19:08.135507 1558425 addons.go:238] Setting addon registry-creds=true in "addons-301682"
	I0630 14:19:08.135508 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135522 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135532 1558425 addons.go:69] Setting volcano=true in profile "addons-301682"
	I0630 14:19:08.135553 1558425 addons.go:238] Setting addon volcano=true in "addons-301682"
	I0630 14:19:08.135560 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135601 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.135968 1558425 addons.go:69] Setting storage-provisioner=true in profile "addons-301682"
	I0630 14:19:08.135968 1558425 addons.go:69] Setting volumesnapshots=true in profile "addons-301682"
	I0630 14:19:08.135383 1558425 addons.go:238] Setting addon yakd=true in "addons-301682"
	I0630 14:19:08.135985 1558425 addons.go:238] Setting addon storage-provisioner=true in "addons-301682"
	I0630 14:19:08.135986 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135992 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135999 1558425 addons.go:69] Setting metrics-server=true in profile "addons-301682"
	I0630 14:19:08.136001 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135468 1558425 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:08.136013 1558425 addons.go:238] Setting addon metrics-server=true in "addons-301682"
	I0630 14:19:08.136018 1558425 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-301682"
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136026 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136033 1558425 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-301682"
	I0630 14:19:08.136033 1558425 addons.go:69] Setting registry=true in profile "addons-301682"
	I0630 14:19:08.136037 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136042 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136046 1558425 addons.go:238] Setting addon registry=true in "addons-301682"
	I0630 14:19:08.136053 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136053 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136063 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136333 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136344 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135988 1558425 addons.go:238] Setting addon volumesnapshots=true in "addons-301682"
	I0630 14:19:08.136373 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136380 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135974 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.135392 1558425 addons.go:69] Setting gcp-auth=true in profile "addons-301682"
	I0630 14:19:08.136406 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135413 1558425 addons.go:238] Setting addon ingress=true in "addons-301682"
	I0630 14:19:08.136410 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136430 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136437 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136439 1558425 mustload.go:65] Loading cluster: addons-301682
	I0630 14:19:08.135985 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136376 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136021 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136019 1558425 addons.go:69] Setting inspektor-gadget=true in profile "addons-301682"
	I0630 14:19:08.136533 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136004 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136408 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136571 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136399 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136594 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136043 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136654 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136035 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.135386 1558425 config.go:182] Loaded profile config "addons-301682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:19:08.136802 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.136830 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.136538 1558425 addons.go:238] Setting addon inspektor-gadget=true in "addons-301682"
	I0630 14:19:08.136860 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.136968 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.137006 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.141678 1558425 out.go:177] * Verifying Kubernetes components...
	I0630 14:19:08.143558 1558425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:19:08.149915 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.149982 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.150069 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.150111 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.153357 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.153432 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.165614 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0630 14:19:08.165858 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0630 14:19:08.166745 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.166906 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.167573 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167595 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.167730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.167744 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.168231 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168297 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.168527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
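[Note on the log lines above: the interleaved "Launching plugin server for driver kvm2", "Plugin server listening at address 127.0.0.1:<port>", and "() Calling .GetVersion" / ".SetConfigRaw" / ".GetMachineName" messages trace libmachine's driver-plugin handshake. Each addon goroutine forks the kvm2 driver binary, which serves RPC on a random loopback port that the parent process then dials before invoking driver methods. A minimal Go sketch of that pattern follows; it is an illustration only, assuming libmachine's net/rpc-over-loopback design, and the toy Driver type here is not libmachine's real method set.]

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"net/rpc"
    )

    // Driver is a stand-in for the RPC surface a machine driver exposes.
    type Driver struct{}

    // GetVersion mirrors the "() Calling .GetVersion" handshake in the log.
    func (d *Driver) GetVersion(_ struct{}, reply *int) error {
    	*reply = 1 // corresponds to "Using API Version 1"
    	return nil
    }

    func main() {
    	srv := rpc.NewServer()
    	if err := srv.Register(&Driver{}); err != nil {
    		log.Fatal(err)
    	}
    	// Random loopback port, as in "Plugin server listening at address 127.0.0.1:<port>".
    	ln, err := net.Listen("tcp", "127.0.0.1:0")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("Plugin server listening at address", ln.Addr())
    	go srv.Accept(ln) // serve driver calls in the background

    	// The parent side: dial the plugin and call a method on it.
    	client, err := rpc.Dial("tcp", ln.Addr().String())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	var version int
    	if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("Using API Version", version)
    }

[Because every addon enables concurrently and each launch gets its own plugin process and port, the listen/GetVersion/GetState sequences interleave, which is why the timestamps below are not monotonic.]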
	I0630 14:19:08.168851 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.168901 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.173235 1558425 addons.go:238] Setting addon default-storageclass=true in "addons-301682"
	I0630 14:19:08.173294 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.173724 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.173785 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.184456 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0630 14:19:08.185663 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.186359 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.186383 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.186868 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.187481 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.187524 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.198676 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0630 14:19:08.199720 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0630 14:19:08.200624 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.201056 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0630 14:19:08.201384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.201425 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.201824 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.202320 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.202341 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.202767 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.203373 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.203425 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.203875 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.204017 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.204559 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.204608 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.204944 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.204958 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.205500 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.206106 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.206167 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.212484 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0630 14:19:08.213076 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.213762 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.213782 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.214717 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0630 14:19:08.214882 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37359
	I0630 14:19:08.215450 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.215549 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.216208 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216234 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216395 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.216419 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.216498 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.216551 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0630 14:19:08.217141 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.217198 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.218026 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218078 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218644 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.218679 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.218897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:19:08.218965 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219098 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0630 14:19:08.219374 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.219416 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.219490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.219517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.219600 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.219645 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.220038 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220058 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.220197 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.220208 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.222722 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0630 14:19:08.222897 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0630 14:19:08.223028 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.223845 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.223892 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.224072 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0630 14:19:08.224388 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0630 14:19:08.224623 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.225142 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.225164 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.225248 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0630 14:19:08.225593 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226043 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.226641 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.226692 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.227826 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.228314 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.228351 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.228730 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.228753 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.228834 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.228874 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0630 14:19:08.229220 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.229470 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.229681 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.229725 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.230097 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.230128 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.240167 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.240974 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.241058 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0630 14:19:08.243477 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.243596 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0630 14:19:08.261647 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0630 14:19:08.261668 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0630 14:19:08.261862 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0630 14:19:08.262201 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0630 14:19:08.261652 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0630 14:19:08.261852 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0630 14:19:08.262971 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.263041 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263580 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263514 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263640 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263642 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263689 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263697 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263766 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.263767 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264204 1558425 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:19:08.264700 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264710 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.264910 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.264924 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265056 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265067 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265244 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265261 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265313 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265330 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265384 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265397 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265490 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265504 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265517 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265522 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265580 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265661 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265661 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:08.265674 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265689 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
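[Note on the "scp memory --> <path> (<n> bytes)" lines: these appear to indicate that the addon manifest is rendered in memory and streamed to the node over SSH, rather than copied from a file on the Jenkins host. A minimal Go sketch of that pattern follows, assuming golang.org/x/crypto/ssh; the copyMemory helper is hypothetical, and minikube's real ssh_runner is more involved.]

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    // copyMemory streams an in-memory payload to a remote path over SSH,
    // in the spirit of the "scp memory --> <path>" log lines above.
    // (Hypothetical helper, not minikube's actual implementation.)
    func copyMemory(client *ssh.Client, payload []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(payload) // the in-memory "file"
    	// Write stdin to the destination path on the node.
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker", // matches Username:docker in the sshutil lines
    		Auth:            []ssh.AuthMethod{ /* private-key auth elided for brevity */ },
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.227:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	manifest := []byte("apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example\n")
    	if err := copyMemory(client, manifest, "/etc/kubernetes/addons/example.yaml"); err != nil {
    		log.Fatal(err)
    	}
    }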
	I0630 14:19:08.265696 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.265706 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.265712 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.265940 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.265988 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266721 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266732 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266787 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266802 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266873 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266885 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266892 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266920 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.266927 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.266935 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.266948 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.266963 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267095 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267169 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267219 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.267412 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267464 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.267868 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.267912 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.268375 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.268443 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.268484 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.269549 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.269597 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.270926 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.272833 1558425 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:19:08.274128 1558425 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.274146 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:19:08.274171 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.274859 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275064 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275721 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.276192 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.275698 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.277235 1558425 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:19:08.277261 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:19:08.277735 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.277888 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:19:08.277911 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:19:08.278583 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.278754 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.278813 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.278881 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:19:08.278897 1558425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:19:08.278922 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279033 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.279041 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:19:08.279054 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279564 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.279577 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:19:08.279593 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.279642 1558425 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:19:08.281429 1558425 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:08.281448 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:19:08.281468 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.281533 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.282713 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:08.283764 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284087 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284228 1558425 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:08.284248 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:19:08.284269 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.284461 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284503 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.284726 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.284883 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.284950 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.284965 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285137 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.285324 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.285515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.285599 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.285736 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286034 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.286041 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286069 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286207 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.286615 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.286628 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286660 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.286673 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.286850 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.286908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.287215 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287232 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.287998 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.287988 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288619 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.288647 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.288829 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.288982 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289082 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.289115 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.289387 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289495 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.289954 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.289983 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.290152 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290230 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.290347 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290431 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.290897 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.291154 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.292418 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.292454 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.292433 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.292721 1558425 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-301682"
	I0630 14:19:08.292738 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.292763 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:08.292887 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.293016 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.293150 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.293200 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.294549 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:19:08.296018 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:19:08.297203 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:19:08.298509 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:19:08.299741 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:19:08.301072 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:19:08.302287 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:19:08.303246 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0630 14:19:08.303926 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.304284 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:19:08.304575 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.304600 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.305069 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.305303 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.305513 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:19:08.305597 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:19:08.305646 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0630 14:19:08.308495 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0630 14:19:08.308465 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0630 14:19:08.309009 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309265 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309301 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.309500 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.309544 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.309729 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.309915 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.310105 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.310445 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.310557 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.310962 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.310986 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312430 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.312542 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0630 14:19:08.312690 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.312715 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0630 14:19:08.312896 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.312908 1558425 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:08.312914 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.312922 1558425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:19:08.312899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.312950 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.312967 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0630 14:19:08.313116 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.313130 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.313608 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.313798 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.314003 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314075 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.314701 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.314761 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.314826 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.315163 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315447 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.315638 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.315743 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.315801 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.316217 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.316239 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.316441 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.317458 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.317480 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.317755 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.318404 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.318763 1558425 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:19:08.319446 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.319608 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.319686 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.319964 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.319978 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320265 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.320279 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:08.320350 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:08.320357 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:08.320810 1558425 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:19:08.320976 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:19:08.321001 1558425 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:19:08.321024 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.321215 1558425 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:19:08.322277 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:19:08.322294 1558425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:19:08.322314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323097 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323112 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.323135 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:08.323167 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:08.323175 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:08.323273 1558425 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0630 14:19:08.323158 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.323505 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.323867 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:19:08.323883 1558425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:19:08.323899 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.323920 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.323964 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0630 14:19:08.324118 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.324491 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.324603 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:19:08.324644 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.324757 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.325272 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.325293 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.327148 1558425 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:19:08.328448 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328463 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.328471 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0630 14:19:08.328485 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.328486 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:19:08.328506 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:19:08.328469 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.328527 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.328555 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329261 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.329271 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329296 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329298 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329306 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.329427 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329488 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329522 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.329831 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.329844 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329873 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.329893 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.329908 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.329932 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.329965 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.330048 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330100 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330127 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.330233 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.330571 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.330635 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.330797 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.331366 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:08.331539 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:08.333151 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.333196 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.333924 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.333946 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.334093 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.334267 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.334413 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.334534 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.335093 1558425 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:19:08.336351 1558425 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:08.336368 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:19:08.336384 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.339580 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340100 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.340140 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.340314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.340523 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.340672 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.340813 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.350360 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0630 14:19:08.350984 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:08.351790 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:08.351819 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:08.352186 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:08.352420 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:08.354260 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:08.356054 1558425 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:19:08.357435 1558425 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:19:08.358781 1558425 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:08.358803 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:19:08.358828 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:08.362552 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.362966 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:08.362990 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:08.363100 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:08.363314 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:08.363506 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:08.363630 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:08.439689 1558425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
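[Note on the command above: the sed pipeline fetches the CoreDNS ConfigMap, edits the Corefile text, and replaces the ConfigMap. Reconstructed directly from the two sed expressions, it injects a hosts block (before the forward line) that resolves host.minikube.internal to the host gateway 192.168.39.1, and a log directive (before the errors line). The resulting Corefile fragment looks like the sketch below; the surrounding stock directives, elided here, are assumed to be the default kubeadm Corefile.]

    .:53 {
        log                # inserted before the existing "errors" line
        errors
        ...                # stock plugins unchanged
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }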
	I0630 14:19:08.476644 1558425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:19:08.843915 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:19:08.877498 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:19:08.886078 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:19:08.886117 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:19:08.911521 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:19:08.934599 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:19:09.020016 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:19:09.040482 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:19:09.040511 1558425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:19:09.043569 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:19:09.148704 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:19:09.202814 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:19:09.202869 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:19:09.278194 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:19:09.278231 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:19:09.295189 1558425 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:09.295224 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:19:09.299217 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:19:09.299263 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:19:09.332360 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:19:09.332403 1558425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:19:09.352402 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:19:09.352438 1558425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:19:09.405398 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:19:09.451227 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:19:09.755506 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:19:09.755546 1558425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:19:09.891227 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:19:09.891271 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:19:09.920129 1558425 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:09.920177 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:19:09.934092 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:19:09.934135 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:19:09.987104 1558425 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:09.987162 1558425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:19:10.065936 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:19:10.412611 1558425 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:19:10.412651 1558425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:19:10.472848 1558425 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:10.472884 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:19:10.534908 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:19:10.637801 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:19:10.637839 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:19:10.658361 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:19:10.787257 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:19:10.787289 1558425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:19:10.989751 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:19:11.047653 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:19:11.047693 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:19:11.196682 1558425 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.196715 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:19:11.291758 1558425 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.852019855s)
	I0630 14:19:11.291806 1558425 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
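The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward directive so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), and it adds a log directive before errors. Reconstructed from the sed script, the injected Corefile block is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

One way to confirm the edit landed, assuming kubectl is pointed at this cluster, is to print the rewritten Corefile:

        kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'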
	I0630 14:19:11.291816 1558425 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.815128335s)
	I0630 14:19:11.292560 1558425 node_ready.go:35] waiting up to 6m0s for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314454 1558425 node_ready.go:49] node "addons-301682" is "Ready"
	I0630 14:19:11.314498 1558425 node_ready.go:38] duration metric: took 21.89293ms for node "addons-301682" to be "Ready" ...
	I0630 14:19:11.314515 1558425 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:19:11.314579 1558425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:19:11.614705 1558425 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:19:11.614735 1558425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:19:11.736486 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:19:11.736514 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:19:11.778191 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:11.869515 1558425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-301682" context rescaled to 1 replicas
	I0630 14:19:12.215816 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:19:12.215858 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:19:12.875440 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:19:12.875469 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:19:13.113763 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:19:13.113791 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:19:13.233897 1558425 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.233936 1558425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:19:13.547481 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:19:13.908710 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.064741353s)
	I0630 14:19:13.908777 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.031226379s)
	I0630 14:19:13.908828 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908848 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908846 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.997298204s)
	I0630 14:19:13.908863 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908877 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908789 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.908930 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.908964 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.974334377s)
	I0630 14:19:13.908996 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909007 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909009 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.888949022s)
	I0630 14:19:13.909048 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909061 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.909699 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.909716 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.909725 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.909733 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910126 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910140 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910150 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910156 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910411 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910438 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910445 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910452 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910457 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.910696 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.910727 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.910744 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.910751 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.910757 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.911970 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912059 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912080 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912106 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:13.912127 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:13.912244 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912321 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:13.912362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912376 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912399 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912409 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912423 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912436 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.912476 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.912487 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913952 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:13.913972 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:15.489658 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:19:15.489718 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:15.493165 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493587 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:15.493623 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:15.493976 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:15.494223 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:15.494515 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:15.494707 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
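The ssh client parameters logged here are enough to reproduce the connection by hand; a hypothetical manual equivalent, using the same key, user, port, and address from the line above, would be:

        ssh -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa -p 22 docker@192.168.39.227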
	I0630 14:19:15.765543 1558425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:19:15.978232 1558425 addons.go:238] Setting addon gcp-auth=true in "addons-301682"
	I0630 14:19:15.978326 1558425 host.go:66] Checking if "addons-301682" exists ...
	I0630 14:19:15.978844 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:15.978897 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:15.997982 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0630 14:19:15.998461 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:15.999138 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:15.999166 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:15.999618 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.000381 1558425 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:19:16.000428 1558425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:19:16.018425 1558425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0630 14:19:16.018996 1558425 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:19:16.019552 1558425 main.go:141] libmachine: Using API Version  1
	I0630 14:19:16.019578 1558425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:19:16.020118 1558425 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:19:16.020378 1558425 main.go:141] libmachine: (addons-301682) Calling .GetState
	I0630 14:19:16.022570 1558425 main.go:141] libmachine: (addons-301682) Calling .DriverName
	I0630 14:19:16.022848 1558425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:19:16.022880 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHHostname
	I0630 14:19:16.026200 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027053 1558425 main.go:141] libmachine: (addons-301682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:16:36", ip: ""} in network mk-addons-301682: {Iface:virbr1 ExpiryTime:2025-06-30 15:18:34 +0000 UTC Type:0 Mac:52:54:00:83:16:36 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-301682 Clientid:01:52:54:00:83:16:36}
	I0630 14:19:16.027107 1558425 main.go:141] libmachine: (addons-301682) DBG | domain addons-301682 has defined IP address 192.168.39.227 and MAC address 52:54:00:83:16:36 in network mk-addons-301682
	I0630 14:19:16.027360 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHPort
	I0630 14:19:16.027605 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHKeyPath
	I0630 14:19:16.027797 1558425 main.go:141] libmachine: (addons-301682) Calling .GetSSHUsername
	I0630 14:19:16.027986 1558425 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/addons-301682/id_rsa Username:docker}
	I0630 14:19:16.771513 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.727888765s)
	I0630 14:19:16.771570 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.622822849s)
	I0630 14:19:16.771591 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771607 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771630 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771647 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771647 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.36619116s)
	I0630 14:19:16.771673 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771688 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771767 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.320503654s)
	I0630 14:19:16.771831 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771842 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.705862816s)
	I0630 14:19:16.771865 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771873 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771904 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.236967233s)
	I0630 14:19:16.771940 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.771966 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.771989 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.113597897s)
	I0630 14:19:16.772016 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772026 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772112 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.782331879s)
	I0630 14:19:16.772132 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772140 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772199 1558425 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.457605469s)
	I0630 14:19:16.772216 1558425 api_server.go:72] duration metric: took 8.637102064s to wait for apiserver process to appear ...
	I0630 14:19:16.772223 1558425 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:19:16.772245 1558425 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0630 14:19:16.771847 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772472 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772489 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772500 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772508 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772567 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772660 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772670 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772678 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772685 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.772744 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.772768 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.772774 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.772782 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.772789 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773055 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773073 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773096 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773119 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773125 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773131 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773137 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773371 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773380 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773388 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773398 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773540 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773583 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773592 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773602 1558425 addons.go:479] Verifying addon registry=true in "addons-301682"
	I0630 14:19:16.773651 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.773661 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.773668 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.773675 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.773927 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.773965 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774128 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774333 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774357 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774383 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774389 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774656 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774694 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774695 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.774703 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774710 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774722 1558425 addons.go:479] Verifying addon ingress=true in "addons-301682"
	I0630 14:19:16.774767 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.774700 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.774931 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.774943 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.774797 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775055 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.775066 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.775086 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.775936 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.775954 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776331 1558425 out.go:177] * Verifying ingress addon...
	I0630 14:19:16.776373 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776407 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776413 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776457 1558425 out.go:177] * Verifying registry addon...
	I0630 14:19:16.776565 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:16.776586 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776591 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.776599 1558425 addons.go:479] Verifying addon metrics-server=true in "addons-301682"
	I0630 14:19:16.776668 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.776681 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.778466 1558425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:19:16.779098 1558425 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-301682 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:19:16.779694 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
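The kapi.go:96 lines that follow are a poll loop: minikube re-checks the pods behind each label selector, logging the current phase (Pending here) on each pass, until they leave Pending. The same two selectors can be watched by hand, assuming the addons-301682 kubectl context:

        kubectl --context addons-301682 -n kube-system get pods -l kubernetes.io/minikube-addons=registry --watch
        kubectl --context addons-301682 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx --watch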
	I0630 14:19:16.788556 1558425 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0630 14:19:16.789906 1558425 api_server.go:141] control plane version: v1.33.2
	I0630 14:19:16.789941 1558425 api_server.go:131] duration metric: took 17.709666ms to wait for apiserver health ...
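The healthz probe is a plain HTTPS GET against the apiserver, and the 200 plus "ok" pair above is the entire expected response. Assuming the cluster keeps the default system:public-info-viewer binding, which exposes /healthz to unauthenticated clients, the same check from the host is:

        curl -k https://192.168.39.227:8443/healthz
        ok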
	I0630 14:19:16.789955 1558425 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:19:16.796628 1558425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:19:16.796662 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:16.796921 1558425 system_pods.go:59] 15 kube-system pods found
	I0630 14:19:16.796954 1558425 system_pods.go:61] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.796961 1558425 system_pods.go:61] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.796972 1558425 system_pods.go:61] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.796976 1558425 system_pods.go:61] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.796984 1558425 system_pods.go:61] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.796987 1558425 system_pods.go:61] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.796992 1558425 system_pods.go:61] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.796997 1558425 system_pods.go:61] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.797004 1558425 system_pods.go:61] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.797011 1558425 system_pods.go:61] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.797018 1558425 system_pods.go:61] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.797028 1558425 system_pods.go:61] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.797035 1558425 system_pods.go:61] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.797042 1558425 system_pods.go:61] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.797049 1558425 system_pods.go:61] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.797057 1558425 system_pods.go:74] duration metric: took 7.094316ms to wait for pod list to return data ...
	I0630 14:19:16.797068 1558425 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:19:16.798790 1558425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:19:16.798807 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:16.809885 1558425 default_sa.go:45] found service account: "default"
	I0630 14:19:16.809914 1558425 default_sa.go:55] duration metric: took 12.83884ms for default service account to be created ...
	I0630 14:19:16.809925 1558425 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:19:16.818226 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.818251 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.818525 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.818587 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:19:16.818715 1558425 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
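This warning is Kubernetes' optimistic-concurrency check firing: the addon callback read the local-path StorageClass, something else updated it in the meantime, and the write-back carried a stale resourceVersion, so the apiserver rejected it with a conflict. Re-reading and retrying clears it; a merge patch also sidesteps the conflict entirely, since it does not submit a resourceVersion. A hypothetical manual equivalent of what the callback was trying to do:

        kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'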
	I0630 14:19:16.836146 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:16.836179 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:16.836489 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:16.836539 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:16.898260 1558425 system_pods.go:86] 15 kube-system pods found
	I0630 14:19:16.898321 1558425 system_pods.go:89] "amd-gpu-device-plugin-g5z6w" [df18eec1-4314-4045-804d-b82424676c71] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:19:16.898334 1558425 system_pods.go:89] "coredns-674b8bbfcf-gcxhf" [89022f36-ce77-49a7-a13b-77ae0fd99bbc] Running
	I0630 14:19:16.898347 1558425 system_pods.go:89] "coredns-674b8bbfcf-gmzj8" [552e5313-660d-46ce-b775-4e8955892501] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 14:19:16.898355 1558425 system_pods.go:89] "etcd-addons-301682" [a24af94a-875d-40dd-92af-74d3a4e214e3] Running
	I0630 14:19:16.898364 1558425 system_pods.go:89] "kube-apiserver-addons-301682" [1ced705a-0d41-412a-b40c-512ebd9fe2e9] Running
	I0630 14:19:16.898371 1558425 system_pods.go:89] "kube-controller-manager-addons-301682" [fecf84e5-d547-4d13-908f-11b6cb46ab95] Running
	I0630 14:19:16.898380 1558425 system_pods.go:89] "kube-ingress-dns-minikube" [688d2765-af4d-40da-a2a8-a18c0936a24d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:19:16.898390 1558425 system_pods.go:89] "kube-proxy-cm28f" [a4192237-41bc-4541-b487-a9003f16fc0d] Running
	I0630 14:19:16.898398 1558425 system_pods.go:89] "kube-scheduler-addons-301682" [f05eb587-4342-4968-9e59-91019671cc43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 14:19:16.898406 1558425 system_pods.go:89] "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:19:16.898431 1558425 system_pods.go:89] "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:19:16.898443 1558425 system_pods.go:89] "registry-694bd45846-x8cnn" [7abfe955-5483-43f9-ad73-92df930e353e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:19:16.898451 1558425 system_pods.go:89] "registry-creds-6b69cdcdd5-n9cld" [042a3494-2e07-4ce8-b9f8-7d37cf08138d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:19:16.898461 1558425 system_pods.go:89] "registry-proxy-2dgr9" [4b452b4b-9d13-4540-ab29-ec9dc9211e75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:19:16.898471 1558425 system_pods.go:89] "storage-provisioner" [93cf7ffa-1e9d-4045-ba8c-26713b592bee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 14:19:16.898485 1558425 system_pods.go:126] duration metric: took 88.551205ms to wait for k8s-apps to be running ...
	I0630 14:19:16.898500 1558425 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:19:16.898565 1558425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:19:17.317126 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:17.374411 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.596164186s)
	W0630 14:19:17.374478 1558425 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:19:17.374547 1558425 retry.go:31] will retry after 162.408109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
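The failure here is a CRD ordering race, not a manifest error: the VolumeSnapshotClass object was applied in the same kubectl batch as the CRDs that define its kind, and the apiserver had not finished registering snapshot.storage.k8s.io/v1 when the object was validated, hence "no matches for kind". minikube's retry, which re-applies with --force a few lines below, succeeds once the CRDs are established. When sequencing this by hand, the usual guard is to wait for the CRD before applying instances of it, for example:

        kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
        kubectl apply -f csi-hostpath-snapshotclass.yaml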
	I0630 14:19:17.425522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.537869 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:19:17.785630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:17.785674 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.306660 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.306889 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.552015 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.004467325s)
	I0630 14:19:18.552194 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552225 1558425 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.529350239s)
	I0630 14:19:18.552276 1558425 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.653693225s)
	I0630 14:19:18.552302 1558425 system_svc.go:56] duration metric: took 1.653798008s WaitForService to wait for kubelet
	I0630 14:19:18.552318 1558425 kubeadm.go:578] duration metric: took 10.417201876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:19:18.552241 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552348 1558425 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:19:18.552645 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552664 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552675 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:18.552686 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:18.552919 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:18.552936 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:18.552948 1558425 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-301682"
	I0630 14:19:18.554300 1558425 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:19:18.555232 1558425 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:19:18.556214 1558425 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:19:18.556827 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:19:18.557433 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:19:18.557459 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:19:18.596354 1558425 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:19:18.596393 1558425 node_conditions.go:123] node cpu capacity is 2
	I0630 14:19:18.596408 1558425 node_conditions.go:105] duration metric: took 44.050461ms to run NodePressure ...
	I0630 14:19:18.596422 1558425 start.go:241] waiting for startup goroutines ...
	I0630 14:19:18.603104 1558425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:19:18.603135 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:18.637868 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:19:18.637900 1558425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:19:18.748099 1558425 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:18.748163 1558425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:19:18.792604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:18.792626 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:18.843691 1558425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:19:19.062533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.282741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.282766 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:19.563538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:19.721889 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.183953285s)
	I0630 14:19:19.721971 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.721990 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.722705 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:19.722805 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.722841 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.722861 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:19.722870 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:19.723362 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:19.723392 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:19.784854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:19.785087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.084451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.338994 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.339229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:20.491192 1558425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.647431709s)
	I0630 14:19:20.491275 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491294 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491664 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.491685 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.491696 1558425 main.go:141] libmachine: Making call to close driver server
	I0630 14:19:20.491704 1558425 main.go:141] libmachine: (addons-301682) Calling .Close
	I0630 14:19:20.491987 1558425 main.go:141] libmachine: (addons-301682) DBG | Closing plugin on server side
	I0630 14:19:20.492026 1558425 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:19:20.492052 1558425 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:19:20.493344 1558425 addons.go:479] Verifying addon gcp-auth=true in "addons-301682"
	I0630 14:19:20.495394 1558425 out.go:177] * Verifying gcp-auth addon...
	I0630 14:19:20.497751 1558425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:19:20.544088 1558425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:19:20.544122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:20.616283 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:20.790338 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:20.794229 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.001876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.103156 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.286215 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:19:21.287404 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.501971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:19:21.603568 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:19:21.782426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:19:21.783543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[log condensed for readability: between 14:19:22 and 14:20:18, kapi.go:96 kept re-polling the same four label selectors (kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx) roughly every 500 ms each; every poll reported the same state, Pending: [<nil>]]
	I0630 14:20:18.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:18.782451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.001149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.061264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.282752 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:19.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.502206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:19.560605 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:19.782509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:19.782554 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.002254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.061241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.282485 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.282882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:20.500924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:20.561822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:20.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:20.783542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.002205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.060747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.282021 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.282563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:21.505254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:21.561819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:21.782724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:21.782735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.000999 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.060710 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.281865 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:22.282163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.501978 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:22.562175 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:22.782908 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:22.782992 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.001604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.061218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.282416 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.282830 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:23.501539 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:23.562050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:23.782303 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:23.784161 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.001477 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.060126 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.282030 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.283809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:24.501806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:24.602840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:24.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:24.782907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.000878 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.061123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.282013 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.283761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:25.504764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:25.606761 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:25.782107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:25.782874 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.000621 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.061556 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.285974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:26.286315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.502580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:26.561105 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:26.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:26.783739 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.000735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.061233 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.282071 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:27.285152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.501573 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:27.561120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:27.782732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:27.782840 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.000630 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.060922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.282390 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:28.283472 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.501080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:28.560454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:28.782967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:28.782976 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.237835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.237889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.336150 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.336331 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:29.501907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:29.602786 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:29.782929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:29.783107 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:30.001264 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:30.060690 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:30.281762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:30.282475 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:30.501884 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:30.572349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:30.783064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:30.783109 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.002526 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:31.062561 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:31.283136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:31.283179 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.501139 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:31.560586 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:31.784336 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:31.784346 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:32.001433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:32.060760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:32.290054 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:32.291744 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:32.500808 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:32.568201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:32.782533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:32.782904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:33.001710 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:33.061374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:33.282933 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:33.284426 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:33.501589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:33.561081 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:33.784027 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:33.784261 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:34.002823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:34.063430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:34.284309 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:34.285663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:34.500807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:34.561036 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:34.784211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:34.784213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:35.001454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:35.061492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:35.281525 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:35.282364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:35.501644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:35.560943 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:35.783199 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:35.783563 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.002111 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:36.060708 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:36.281535 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:36.283996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.861446 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:36.861593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:36.965825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:36.966272 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.001158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:37.061370 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:37.283380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:37.283513 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.501468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:37.561192 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:37.785517 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:37.786292 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:38.001484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:38.061069 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:38.284714 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:38.284846 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:38.502574 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:38.561181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:38.782537 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:38.783069 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:39.001928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:39.061873 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:39.282406 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:39.283481 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:39.503169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:39.561098 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:39.782813 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:39.783641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:40.002181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:40.060266 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:40.282891 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:40.283849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:40.500843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:40.560442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:40.782926 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:40.783029 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:41.001321 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:41.060760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:41.281798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:41.284037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:41.502572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:41.560951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:41.782285 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:41.783051 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.001897 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:42.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:42.283725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.283888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:42.501480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:42.561461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:42.782548 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:42.782713 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.093940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:43.097843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:43.282818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.282819 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:43.501106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:43.560130 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:43.782663 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:43.783944 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.001422 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:44.060503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:44.281922 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:44.283136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.501600 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:44.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:44.782904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:44.782953 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:45.001192 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:45.060597 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:45.283117 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:45.283173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:45.501174 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:45.560528 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:45.786937 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:45.787508 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:46.003194 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:46.061532 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:46.283078 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:46.283645 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:46.501606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:46.561149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:46.783542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:46.783577 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:47.001484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:47.061088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:47.282533 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:47.283511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:47.501685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:47.560979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:47.783792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:47.783801 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:48.000652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:48.061347 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:48.282791 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:48.283149 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:48.501196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:48.560571 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:48.782724 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:48.783665 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:49.001578 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:49.060917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:49.283443 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:49.283529 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:49.501548 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:49.560886 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:49.782606 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:49.782806 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:50.001040 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.060499 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.282867 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.283070 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:50.501307 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:50.560388 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:50.782746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:50.782790 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.000827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.061599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.281741 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.282303 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:51.501882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:51.561159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:51.782745 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:51.784064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.001127 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.060734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.281924 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.282442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:52.501618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:52.560955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:52.782622 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:52.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.001976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.060014 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.283833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:53.283868 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.501946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:53.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:53.787788 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:53.788281 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.001841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.282587 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.282894 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:54.501076 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:54.560738 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:54.783982 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:54.784379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.001546 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.061794 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.282534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:55.283165 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.501579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:55.560818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:55.782386 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:55.782537 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.001725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.060844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.282248 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.283345 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:56.501508 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:56.560858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:56.781927 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:56.783218 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.001706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.061118 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.283582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.283762 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:57.501038 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:57.560439 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:57.783590 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:57.783720 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.001746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.061827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.282480 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.282960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:58.501434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:58.561028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:58.781998 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:58.782879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.001764 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.061200 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.282609 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:20:59.282747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.501377 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:20:59.560960 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:20:59.785243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:20:59.785330 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.001691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.061010 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.282764 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:00.283580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.501865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:00.561741 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:00.784015 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:00.784091 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.060981 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.282859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:01.283036 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.501809 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:01.561922 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:01.782501 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:01.783709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.002244 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.061572 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.284366 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:02.501516 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:02.562167 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:02.782718 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:02.783603 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.002195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.060569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.283243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.283492 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:03.501693 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:03.560599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:03.783852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:03.784006 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.000924 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.061226 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.282297 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:04.282987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.501089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:04.560458 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:04.783051 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:04.783361 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.001357 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.060980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.282432 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.284945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:21:05.501078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:21:05.560392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:21:05.782556 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:21:05.782745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[460 similar kapi.go:96 poll lines, 14:21:06.001 through 14:22:03.283, condensed: the four selectors "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver" and "kubernetes.io/minikube-addons=registry" are each re-checked every ~500 ms, and every matching pod stays Pending: [<nil>] for the entire interval]
	I0630 14:22:03.501473 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:03.560862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:03.782327 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:03.783798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:04.001354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:04.060898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:04.283327 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:04.283635 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:04.501503 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:04.560912 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:04.782536 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:04.783678 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:05.001055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:05.061771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:05.282390 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:05.284013 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:05.501292 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:05.561056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:05.782798 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:05.784365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:06.001516 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:06.061337 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:06.282754 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:06.283371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:06.502565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:06.562077 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:06.783138 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:06.783697 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:07.000859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:07.062329 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:07.282379 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:07.282968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:07.501169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:07.560984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:07.782268 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:07.784049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:08.001494 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:08.061308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:08.283724 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:08.284185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:08.502230 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:08.560967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:08.783790 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:08.783900 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:09.001053 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:09.060828 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:09.283284 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:09.283806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:09.501109 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:09.560617 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:09.782234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:09.783349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.001664 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:10.061833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:10.283401 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:10.283402 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.501704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:10.560961 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:10.783469 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:10.783522 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.001757 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:11.061124 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:11.283792 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:11.283989 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.501103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:11.560840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:11.782033 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:11.783604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.003374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:12.060433 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:12.282976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.283110 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:12.501047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:12.560677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:12.783921 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:12.784167 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.002696 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:13.063144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:13.282766 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:13.282879 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.501555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:13.561637 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:13.781893 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:13.782616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.001004 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:14.061103 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:14.283205 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.283446 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:14.501550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:14.562143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:14.783957 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:14.784112 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.001423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:15.062033 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:15.282424 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.282946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:15.501071 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:15.560348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:15.782780 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:15.783648 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:16.060889 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:16.282525 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:16.283260 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.501360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:16.560258 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:16.783827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:16.783875 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.001565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:17.060813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:17.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.283097 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:17.501048 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:17.560778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:17.781850 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:17.783463 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:18.002176 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:18.060602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:18.282443 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:18.283181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:18.501844 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:18.560670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:18.783600 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:18.783637 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.002695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:19.061454 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:19.282337 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:19.284196 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.501898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:19.566207 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:19.783150 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:19.783388 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.001915 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:20.063129 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:20.284273 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.285468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:20.504702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:20.560957 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:20.785008 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:20.785055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:21.001554 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:21.061007 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:21.290166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:21.290315 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:21.504702 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:21.607046 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:21.782303 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:21.783112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.001610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:22.061225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:22.282696 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:22.283116 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.501584 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:22.562703 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:22.782599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:22.783389 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.002163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:23.061027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:23.283818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.283940 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:23.501359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:23.561687 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:23.781738 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:23.783834 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.001106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:24.060840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:24.283144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.283159 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:24.501879 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:24.561177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:24.784299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:24.784387 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.001461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:25.060909 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:25.282763 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.283372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:25.501554 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:25.561056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:25.782472 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:25.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.002067 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:26.060538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:26.282323 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:26.284932 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.501783 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:26.561217 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:26.786385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:26.786624 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.002328 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:27.060923 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:27.282259 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.283369 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:27.502704 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:27.561567 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:27.783592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:27.783609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.001238 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:28.061117 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:28.283592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:28.283779 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.503754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:28.561835 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:28.783295 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:28.783426 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:29.001650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:29.061565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:29.284407 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:29.284751 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:29.501482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:29.561448 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:29.783602 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:29.783747 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:30.000612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:30.061762 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:30.282244 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:30.282945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:30.501114 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:30.561086 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:30.783309 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:30.783420 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.001952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:31.060101 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:31.282326 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.284221 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:31.501777 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:31.561372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:31.783156 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:31.783322 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.002694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:32.061381 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:32.282764 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:32.284529 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.505575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:32.566298 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:32.784512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:32.784864 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:33.001675 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:33.060993 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:33.282234 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:33.283872 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:33.501278 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:33.560542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:33.787772 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:33.787934 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.001324 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:34.060773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:34.282840 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.283511 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:34.502371 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:34.560627 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:34.783094 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:34.783413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:35.002904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.061777 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.283905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:35.283934 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.501100 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:35.560247 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:35.783592 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:35.784358 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.001812 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.062616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.282087 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:36.282661 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.500966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:36.562267 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:36.783442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:36.783471 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.001767 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.061035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.282352 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.283181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:37.501481 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:37.562204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:37.782528 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:37.783035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.001204 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.060871 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.282324 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.283278 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:38.501823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:38.562308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:38.784023 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:38.784618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.000984 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.062203 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.282888 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.283474 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:39.502760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:39.563797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:39.782847 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:39.782939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.001158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.061550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.281624 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.282091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:40.501221 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:40.560905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:40.782931 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:40.782945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.002061 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.061582 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.283006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.283254 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:41.501580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:41.561026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:41.785372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:41.785518 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.001833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.064672 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.282529 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.283845 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:42.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:42.561279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:42.783728 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:42.784425 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.002525 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.061268 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.283438 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.283504 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:43.501326 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:43.561048 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:43.782534 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:43.782716 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.001543 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.062385 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.282669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.283862 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:44.501191 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:44.562184 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:44.782210 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:44.783841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.002615 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.282873 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:45.283074 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.501319 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:45.560538 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:45.781794 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:45.783447 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.002122 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.060715 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.282111 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:46.282760 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.501006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:46.560037 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:46.784753 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:46.784785 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.001157 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.060804 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.281941 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.283335 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:47.501734 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:47.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:47.782851 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:47.783119 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.001360 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.061016 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.282370 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:48.283342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.501709 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:48.560891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:48.783888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:48.784092 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.001883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.060787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.283083 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:49.283344 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.501731 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:49.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:49.782618 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:49.782681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.000966 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.060550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.283074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.284257 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:50.501643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:50.561462 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:50.783025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:50.783475 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.002569 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.063186 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.283275 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:51.283325 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.501455 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:51.560436 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:51.782975 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:51.783423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.001631 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.061667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.281818 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.282342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:52.501284 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:52.560864 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:52.782151 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:52.782348 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.007368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.060641 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.283706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.284276 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:53.501189 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:53.560654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:53.782398 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:53.782656 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.002682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.061286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.282383 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:54.283815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.501271 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:54.560549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:54.790530 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:54.790755 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.001308 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.061047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.284397 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.284413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:55.501771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:55.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:55.781963 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:55.782941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.000822 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.061650 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.283524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:56.283580 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.501667 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:56.560681 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:56.781684 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:56.782151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.083466 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.281690 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.283202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:57.501647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:57.561213 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:57.782612 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:57.782987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.001789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.282211 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:58.284618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.500839 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:58.561378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:58.784612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:58.784669 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.000744 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.062091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.660112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:22:59.664035 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:22:59.664534 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.665074 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:22:59.782692 1558425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:22:59.783576 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.003476 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.061094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.285714 1558425 kapi.go:107] duration metric: took 3m43.507242469s to wait for app.kubernetes.io/name=ingress-nginx ...
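	(The kapi.go:96 lines above are minikube's per-selector wait loop: roughly every 500ms it lists the pods matching one label selector and logs the current phase until they come up, at which point kapi.go:107 records the duration metric, as it just did for app.kubernetes.io/name=ingress-nginx. A minimal client-go sketch of that polling pattern follows; the function name, interval, and log format are illustrative assumptions, not minikube's actual kapi implementation.)

	package kapiwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning lists pods matching selector in ns on a fixed
	// interval and returns once all of them are Running, or an error
	// when timeout elapses.
	func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				return nil
			}
			// Shaped like the kapi.go:96 lines above.
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
	}

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	(Under this pattern a selector whose pods never become Ready simply keeps logging Pending until the caller's timeout expires, which is why the kubernetes.io/minikube-addons=registry lines continue below.)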
	I0630 14:23:00.286859 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:00.502299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:00.561094 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:00.783440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.001892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.061673 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.283876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:01.501245 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:01.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:01.783169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.005689 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.061445 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.283736 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:02.501952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:02.560234 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:02.783177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.002017 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.061604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.283817 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:03.500854 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:03.561092 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:03.783701 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.001024 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.063589 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.283519 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:04.501728 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:04.566277 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:04.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.002269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:23:05.060852 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.283974 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:05.507100 1558425 kapi.go:107] duration metric: took 3m45.009344267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:23:05.509228 1558425 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-301682 cluster.
	I0630 14:23:05.510978 1558425 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:23:05.512549 1558425 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
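	(The three out.go messages above describe the gcp-auth addon's behavior: credentials are injected into every newly created pod unless the pod carries a label with the gcp-auth-skip-secret key. A hedged sketch of opting a pod out, using client-go types; the pod name, image, and the label value "true" are illustrative assumptions, since the message only specifies the label key.)

	package gcpauthskip

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// podWithoutGCPCreds builds a pod carrying the gcp-auth-skip-secret
	// label so the addon skips mounting credentials into it.
	func podWithoutGCPCreds() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // illustrative name
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
	}

	(As the last message notes, the label only affects pods created from here on; pods that already existed pick up credentials only after being recreated or after rerunning addons enable with --refresh.)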
	I0630 14:23:05.561380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:05.783374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.062392 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.283807 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:06.561684 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:06.785144 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.066028 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.284562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:07.561973 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:07.785021 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.060666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.283201 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:08.561745 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:08.783877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.061656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.284091 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:09.561492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:09.787449 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.062802 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.284110 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:10.560730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:10.783003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.060643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.284380 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:11.561869 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:11.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.060853 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.283759 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:12.560457 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:12.784225 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.061224 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.283671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:13.560056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:13.783513 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.061509 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.283696 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:14.561206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:14.784675 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.061356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.284952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:15.560611 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:15.784123 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.061089 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.283173 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:16.561168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:16.786612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.061952 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.284288 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:17.561055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:17.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.061797 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.283435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:18.560968 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:18.783185 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.061655 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.285318 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:19.561730 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:19.782858 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.061290 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.284108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:20.560495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:20.783799 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.060435 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.283888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:21.560658 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:21.784042 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.064259 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.283397 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:22.562304 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:22.783790 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.062882 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.283492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:23.565989 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:23.783917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.061006 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.284421 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:24.561604 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:24.783815 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.060798 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.283106 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:25.572104 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:25.783229 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.061003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.283003 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:26.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:26.783676 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.061789 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.283647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:27.561595 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:27.784152 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.061056 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.284078 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:28.561025 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:28.782901 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.060975 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.284112 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:29.561034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:29.783332 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.060612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.284928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:30.560487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:30.784282 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.061202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.283691 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:31.561004 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:31.783682 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.061162 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.283339 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:32.561471 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:32.783951 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.060926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.283825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:33.563195 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:33.783726 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.060359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.283321 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:34.561124 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:34.783616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.061349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.283415 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:35.561084 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:35.784344 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.061159 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.283670 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:36.562677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:36.783294 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.062782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.284848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:37.560236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:37.783962 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.060039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.283768 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:38.560166 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:38.782740 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:39.060825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:39.284072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:39.561353 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:39.783269 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:40.061500 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:40.283553 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:40.561115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:40.784062 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:41.061241 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:41.283888 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:41.560612 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:41.784453 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:42.061524 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:42.283887 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:42.560352 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:42.783080 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:43.060608 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:43.283756 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:43.561250 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:43.783439 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:44.061813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:44.284043 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:44.560423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:44.783723 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:45.062299 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:45.283512 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:45.562182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:45.783464 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:46.061770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:46.283290 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:46.561127 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:46.784143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:47.062746 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:47.283685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:47.561750 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:47.783610 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:48.061340 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:48.284254 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:48.561143 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:48.783030 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:49.060658 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:49.283841 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:49.561356 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:49.783263 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:50.061883 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:50.283413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:50.561440 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:50.783774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:51.060233 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:51.283243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:51.561692 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:51.783771 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:52.060778 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:52.283008 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:52.560248 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:52.784031 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:53.061426 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:53.284243 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:53.561964 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:53.783354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:54.061484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:54.283980 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:54.560599 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:54.783926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:55.060942 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:55.284120 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:55.560825 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:55.782802 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:56.059964 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:56.283717 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:56.560585 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:56.784927 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:57.061040 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:57.283344 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:57.561904 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:57.783533 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:58.061374 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:58.284877 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:58.560774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:58.784163 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:59.061765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:59.284774 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:23:59.561857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:23:59.782773 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:00.061141 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:00.283396 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:00.561139 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:00.783625 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:01.061333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:01.283747 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:01.560949 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:01.783456 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:02.061482 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:02.284158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:02.560735 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:02.784827 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:03.061045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:03.282806 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:03.560671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:03.782706 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:04.060646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:04.283286 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:04.560657 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:04.783580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:05.061560 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:05.283579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:05.561242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:05.783654 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:06.061539 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:06.283732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:06.560228 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:06.783593 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:07.061818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:07.283996 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:07.561190 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:07.783368 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:08.062755 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:08.283379 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:08.561279 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:08.783976 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:09.061115 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:09.285316 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:09.561149 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:09.783381 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:10.061707 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:10.284158 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:10.560899 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:10.783331 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:11.060911 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:11.285242 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:11.567687 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:11.783399 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:12.061770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:12.284164 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:12.561303 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:12.784575 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:13.062079 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:13.283362 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:13.561544 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:13.784026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:14.061171 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:14.284055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:14.560334 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:14.784816 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:15.061671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:15.285032 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:15.560810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:15.782955 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:16.060555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:16.283695 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:16.561223 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:16.784108 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:17.061443 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:17.283885 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:17.560716 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:17.783754 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:18.061542 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:18.282788 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:18.560770 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:18.783579 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:19.060318 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:19.283045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:19.560843 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:19.782930 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:20.061222 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:20.282971 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:20.560677 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:20.783818 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:21.060551 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:21.283550 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:21.562179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:21.784378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:22.062214 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:22.283320 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:22.560609 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:22.783739 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:23.060891 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:23.283079 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:23.561022 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:23.783812 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:24.060803 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:24.283620 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:24.561450 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:24.784169 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:25.061522 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:25.283646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:25.561354 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:25.784907 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:26.061231 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:26.283357 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:26.561047 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:26.782954 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:27.062644 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:27.283870 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:27.560460 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:27.783972 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:28.061026 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:28.283434 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:28.560383 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:28.784236 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:29.061863 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:29.283492 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:29.561072 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:29.784790 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:30.060929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:30.283116 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:30.560849 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:30.784365 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:31.061044 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:31.283485 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:31.560958 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:31.783343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:32.060933 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:32.283256 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:32.560785 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:32.783833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:33.063333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:33.283905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:33.561202 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:33.783647 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:34.060633 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:34.283403 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:34.561258 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:34.783824 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:35.061027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:35.283280 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:35.560614 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:35.783666 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:36.060343 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:36.283562 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:36.561179 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:36.783181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:37.061128 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:37.284062 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:37.560766 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:37.783336 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:38.061890 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:38.283765 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:38.561181 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:38.782988 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:39.061782 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:39.284045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:39.560892 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:39.783646 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:40.061732 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:40.283168 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:40.561039 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:40.783011 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:41.060663 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:41.284034 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:41.560401 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:41.783929 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:42.060886 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:42.283413 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:42.560898 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:42.783070 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:43.061272 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:43.284495 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:43.566045 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:43.785033 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:44.060787 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:44.284857 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:44.563055 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:44.782917 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:45.062050 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:45.288461 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:45.560836 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:45.783182 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:46.060851 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:46.282596 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:46.561215 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:46.783686 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:47.061881 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:47.283430 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:47.561484 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:47.784227 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:48.061049 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:48.283508 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:48.560991 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:48.783228 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:49.061557 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:49.283945 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:49.560814 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:49.783480 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:50.062151 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:50.283328 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:50.561147 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:50.783624 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:51.061581 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:51.284088 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:51.561199 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:51.784000 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:52.060829 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:52.283475 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:52.561084 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:52.783246 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:53.061297 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:53.283184 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:53.561060 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:53.783926 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:54.060947 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:54.284652 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:54.560498 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:54.783783 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:55.061342 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:55.284840 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:55.560442 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:55.791617 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:56.061618 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:56.286833 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:56.560475 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:56.783629 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:57.061136 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:57.283837 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:57.562671 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:57.783967 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:58.060688 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:58.283033 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:58.560616 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:58.783876 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:59.060565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:59.283359 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:24:59.561198 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:24:59.783494 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:00.062642 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:00.283954 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:00.560177 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:00.782981 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:01.060549 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:01.283643 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:01.561232 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:01.783995 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:02.060913 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:02.283540 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:02.561001 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:02.783253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:03.061494 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:03.283619 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:03.561423 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:03.783816 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:04.061121 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:04.283938 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:04.560330 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:04.783093 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:05.061253 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:05.283468 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:05.561349 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:05.783656 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:06.061451 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:06.284555 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:06.561027 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:06.783118 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:07.060941 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:07.283486 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:07.560979 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:07.783987 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:08.061469 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:08.282865 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:08.560230 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:08.783905 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:09.060919 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:09.284341 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:09.561725 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:09.782920 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:10.061064 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:10.283364 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:10.560694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:10.783580 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:11.061012 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:11.282946 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:11.560317 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:11.783830 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:12.060685 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:12.283378 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:12.561716 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:12.782965 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:13.061099 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:13.282813 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:13.560694 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:13.783665 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:14.061372 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:14.282565 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:14.561326 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:14.783180 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:15.060939 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:15.283013 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:15.560848 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:15.783206 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.061333 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.283487 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:25:16.560928 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:16.779853 1558425 kapi.go:107] duration metric: took 6m0.000148464s to wait for kubernetes.io/minikube-addons=registry ...
	W0630 14:25:16.780114 1558425 out.go:270] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0630 14:25:17.061823 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:17.560570 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.061810 1558425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:25:18.557742 1558425 kapi.go:107] duration metric: took 6m0.000905607s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0630 14:25:18.557918 1558425 out.go:270] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
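The two warnings above are the direct cause of this test's failure: each addon wait polls its label selector roughly every 500 ms (the two selectors interleave in the log) under a single 6-minute context, and because the pods never left Pending, both waits ended with "context deadline exceeded". Below is a minimal sketch of that label-selector wait pattern using standard client-go calls; it is not minikube's actual kapi.go code, and the name waitForLabeledPods is invented for illustration.

package waitpods

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPods polls every 500ms until every pod matching selector in
// ns reports phase Running, or until the 6-minute deadline expires, in which
// case the wait ends with a "context deadline exceeded" error.
func waitForLabeledPods(cs kubernetes.Interface, ns, selector string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	return wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Transient API errors and empty lists are not fatal;
				// keep polling until the context deadline decides.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

Returning (false, nil) on list errors means only the outer deadline can end the wait, which matches the behavior in the log: six minutes of Pending states followed by a single timeout warning per addon.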
	I0630 14:25:18.560047 1558425 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, ingress, gcp-auth
	I0630 14:25:18.561439 1558425 addons.go:514] duration metric: took 6m10.426236235s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots ingress gcp-auth]
	I0630 14:25:18.561506 1558425 start.go:246] waiting for cluster config update ...
	I0630 14:25:18.561537 1558425 start.go:255] writing updated cluster config ...
	I0630 14:25:18.561951 1558425 ssh_runner.go:195] Run: rm -f paused
	I0630 14:25:18.569844 1558425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:25:18.574216 1558425 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.580161 1558425 pod_ready.go:94] pod "coredns-674b8bbfcf-gcxhf" is "Ready"
	I0630 14:25:18.580187 1558425 pod_ready.go:86] duration metric: took 5.939771ms for pod "coredns-674b8bbfcf-gcxhf" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.583580 1558425 pod_ready.go:83] waiting for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.589631 1558425 pod_ready.go:94] pod "etcd-addons-301682" is "Ready"
	I0630 14:25:18.589656 1558425 pod_ready.go:86] duration metric: took 6.047747ms for pod "etcd-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.592675 1558425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.598838 1558425 pod_ready.go:94] pod "kube-apiserver-addons-301682" is "Ready"
	I0630 14:25:18.598865 1558425 pod_ready.go:86] duration metric: took 6.165834ms for pod "kube-apiserver-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.608664 1558425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:18.974819 1558425 pod_ready.go:94] pod "kube-controller-manager-addons-301682" is "Ready"
	I0630 14:25:18.974852 1558425 pod_ready.go:86] duration metric: took 366.160564ms for pod "kube-controller-manager-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.183963 1558425 pod_ready.go:83] waiting for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.575199 1558425 pod_ready.go:94] pod "kube-proxy-cm28f" is "Ready"
	I0630 14:25:19.575240 1558425 pod_ready.go:86] duration metric: took 391.247311ms for pod "kube-proxy-cm28f" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:19.774681 1558425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.173968 1558425 pod_ready.go:94] pod "kube-scheduler-addons-301682" is "Ready"
	I0630 14:25:20.174011 1558425 pod_ready.go:86] duration metric: took 399.300804ms for pod "kube-scheduler-addons-301682" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:25:20.174030 1558425 pod_ready.go:40] duration metric: took 1.603886991s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:25:20.223671 1558425 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:25:20.225538 1558425 out.go:177] * Done! kubectl is now configured to use "addons-301682" cluster and "default" namespace by default
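The pod_ready lines above apply a different criterion than the addon waits: a pod counts as "Ready" based on its status conditions rather than its phase. A minimal sketch of that condition check, assuming only the standard k8s.io/api types; the helper isPodReady is a hypothetical name for illustration, not minikube's pod_ready.go code.

package waitpods

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True. This is
// the "Ready" signal the pod_ready log lines wait for, and it is stricter
// than phase Running because readiness probes must also pass.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}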
	
	
	==> CRI-O <==
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.856181085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41bbf4c0-4cf3-4f8a-8bd4-5fbe837c4279 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.856800435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cbc731321432449822253c4ca17cd5e3207a27d23928a611803a79728b3822f,PodSandboxId:43868af5a7e43fcff04d95b0e60c4b31b9e26b455c2e4032e94e4c1797966944,Metadata:&ContainerMetadata{Name:gadget,Attempt:4,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a,State:CONTAINER_EXITED,CreatedAt:1751293564376992829,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mrnh4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f033c8a2-1ce7-4009-8b24-756b9f31550e,},Annotations:map[string]string{io.kubernetes.container.hash: 1446b873,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernet
es.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17ff3ad029d095af0767c96cf2934d45854344cdf884ba4eed1a0f8bc867aba,PodSandboxId:d3c666dc5a318bdbe138538b35edf5456c54d5d4d7b255b7a49ad870612d5b47,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:6,},Image:&ImageSpec{Image:926061c8f6ec365514a52162138d8b0d1bf99777b6d967797d26eabc00d3a267,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:926061c8f6ec365514a52162138d8b0d1bf99777b6d967797d26eabc00d3a267,State:CONTAINER_EXITED,CreatedAt:1751293517729507962,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6d967984f9-l9lpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bcd520ac-b89d-4aa8-80a3-08fcea21e74
2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b34e390,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]stri
ng{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419
099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_R
UNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Co
ntainer{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196
-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9x
c5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisio
ner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886cabb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[s
tring]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,Create
dAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image
:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisio
ner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&I
mageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba8
24c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206
f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3
cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5
377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,Po
dSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41bbf4c0-4cf3-4f8a-8bd4-5fbe837c4279 name=/runtime.v1.RuntimeService/ListContain
ers
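The entries above are cri-o's responses to the kubelet's periodic CRI polls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers). For reference, the same three calls can be issued by hand with crictl; a minimal sketch, assuming cri-o's default endpoint unix:///var/run/crio/crio.sock (the socket path is an assumption, not taken from this report) and a shell inside the node (e.g. minikube ssh):

	# RuntimeService/Version -- runtime name/version, as in the VersionResponse above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# ImageService/ImageFsInfo -- image filesystem usage (mountpoint, bytes, inodes)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	# RuntimeService/ListContainers with no filter -- all containers in any state
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

This reproduces, outside the test harness, the same container list that the log entries here record.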
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.943742711Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cf3e437-aa21-4299-b305-25a5e46d1be3 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.943932463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cf3e437-aa21-4299-b305-25a5e46d1be3 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.945464234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fbfcd12-df3a-4a42-b9df-1af794daa280 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.947517857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293668947452385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fbfcd12-df3a-4a42-b9df-1af794daa280 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.948625754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=749404c3-1fcd-481a-ae8c-fa40df09f412 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.948701231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=749404c3-1fcd-481a-ae8c-fa40df09f412 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.949223646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cbc731321432449822253c4ca17cd5e3207a27d23928a611803a79728b3822f,PodSandboxId:43868af5a7e43fcff04d95b0e60c4b31b9e26b455c2e4032e94e4c1797966944,Metadata:&ContainerMetadata{Name:gadget,Attempt:4,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a,State:CONTAINER_EXITED,CreatedAt:1751293564376992829,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mrnh4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f033c8a2-1ce7-4009-8b24-756b9f31550e,},Annotations:map[string]string{io.kubernetes.container.hash: 1446b873,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernet
es.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17ff3ad029d095af0767c96cf2934d45854344cdf884ba4eed1a0f8bc867aba,PodSandboxId:d3c666dc5a318bdbe138538b35edf5456c54d5d4d7b255b7a49ad870612d5b47,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:6,},Image:&ImageSpec{Image:926061c8f6ec365514a52162138d8b0d1bf99777b6d967797d26eabc00d3a267,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:926061c8f6ec365514a52162138d8b0d1bf99777b6d967797d26eabc00d3a267,State:CONTAINER_EXITED,CreatedAt:1751293517729507962,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6d967984f9-l9lpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bcd520ac-b89d-4aa8-80a3-08fcea21e74
2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b34e390,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]stri
ng{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419
099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_R
UNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Co
ntainer{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196
-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9x
c5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisio
ner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886cabb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[s
tring]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,Create
dAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image
:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisio
ner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&I
mageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba8
24c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206
f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3
cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5
377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,Po
dSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=749404c3-1fcd-481a-ae8c-fa40df09f412 name=/runtime.v1.RuntimeService/ListContain
ers
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.987049033Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3107c0c-be65-4cfd-a918-b7a971c8a427 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.987140872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3107c0c-be65-4cfd-a918-b7a971c8a427 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.988624439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87b690ad-9cff-47fb-b221-f0c963261c38 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.989716719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293668989693003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87b690ad-9cff-47fb-b221-f0c963261c38 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.990307136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aefcba50-1bb4-495e-bf79-efe51405b63d name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.990374907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aefcba50-1bb4-495e-bf79-efe51405b63d name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:27:48 addons-301682 crio[849]: time="2025-06-30 14:27:48.991034569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0cbc731321432449822253c4ca17cd5e3207a27d23928a611803a79728b3822f,PodSandboxId:43868af5a7e43fcff04d95b0e60c4b31b9e26b455c2e4032e94e4c1797966944,Metadata:&ContainerMetadata{Name:gadget,Attempt:4,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a,State:CONTAINER_EXITED,CreatedAt:1751293564376992829,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mrnh4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f033c8a2-1ce7-4009-8b24-756b9f31550e,},Annotations:map[string]string{io.kubernetes.container.hash: 1446b873,io.kubernetes.container.preStopHandler: {\"exec\":{\
"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb1fec83c55c48e28234f7cd8d03ef742a07609b60219be6bac7d10edefa31a,PodSandboxId:744d3a8558a5139f373861c4e488f7ba0b5cf73472ed4f3f8dffdd2bf1bedc89,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1751293524748765738,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a7b88ec8-b589-45fc-8044-8377751c36ab,},Annotations:map[string]string{io.kubernet
es.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4356fb8a203d9cc624e7f3e7b890aaa91e5effc2b429bb2d8ca7889b82e95a8,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1751293518334360943,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17ff3ad029d095af0767c96cf2934d45854344cdf884ba4eed1a0f8bc867aba,PodSandboxId:d3c666dc5a318bdbe138538b35edf5456c54d5d4d7b255b7a49ad870612d5b47,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:6,},Image:&ImageSpec{Image:926061c8f6ec365514a52162138d8b0d1bf99777b6d967797d26eabc00d3a267,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:926061c8f6ec365514a52162138d8b0d1bf99777b6d967797d26eabc00d3a267,State:CONTAINER_EXITED,CreatedAt:1751293517729507962,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6d967984f9-l9lpc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bcd520ac-b89d-4aa8-80a3-08fcea21e74
2,},Annotations:map[string]string{io.kubernetes.container.hash: 1b34e390,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ec6a97e3e1661556501f3f5943d1b6021bcdca5c2a3fe75a137e6acee4ef4,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1751293485055594437,Labels:map[string]stri
ng{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8810b68e820601a83e45875bbe1191262dc1bc9efe38c6ee62f17c2d9c52c2,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1751293419
099332756,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977ef3af7745673830053d1e1607963e60edda63b17556ef1ca342e7cab68c9c,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_R
UNNING,CreatedAt:1751293386104153023,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12db79e5b741e0e1b29fa66013811c40f96cff48910509dfed89c831c60258c6,PodSandboxId:e27c33843e336f94294367d335bc0b847329f5bd9c9478caf30b310257fc28d1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:158e2f2d90f2171e72d1eff54855d96dca71c4f3223d47d5d823fdee6fd256d8,State:CONTAINER_RUNNING,CreatedAt:1751293379859947403,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-67687b59dd-hqql8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9070bbe-a9fc-4824-80c8-ce86fc11c62f,},Annotations:map[string]string{io.kubernetes.container.hash: 1ad45e09,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Co
ntainer{Id:5dfe9d02b1b1a4dacc490a2f69ce931b9498ca6e0596999969afbe9efa2c616b,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1751293341278237160,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:470ef449849e91be828dce58e7a3ec6d7ea0cc28e94a5fb71c40a46f2a1d6515,PodSandboxId:4736a1c095805d641e5953bbd728e374d1a3db2d3c52383ebb89de45644a1e62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293276499233017,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-m97pd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae2714e2-0217-4232-b42e-01638039151d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d1724e2a8e9857eed3c9736578bd80039118b9960ea4d59f45725d2484435f,PodSandboxId:51d81b5aefa46ab178d49f03094d383e83574be148039a54acad118421676af6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1751293276379342188,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-h4qg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3579c14e-b1af-44b4-80e1-727abf294d50,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089511c925cdb142e956b84b457f0db13cc38987e5cfc74dd8b149d2901302ca,PodSandboxId:901b27bd18ec3115b551d0c45d9c52b1169edc817e9d8581361dc87300b4c689,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1751293274331309180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-zvnk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7994b044-5196
-43e0-a92d-5a3ae4166a54,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e8c85ce81515924715b31c231867975a89efbdce27306df47d0d61f04fc685,PodSandboxId:754958dc28d197beba983e9989dafb418bf499e8eb9623efe3b34533ad477be7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1751293272804133303,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 26c41ba0-a3e3-474e-a7b7-bcc9457de690,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49554ce7e85de90c96f5817881f8e63bbcaf45b01fea4a38db35038e0a7550,PodSandboxId:ef302c090f9a89672485967df8e610f09d5eea3ad3a913ee1cfe8b86a3d96d15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1751293271018828443,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f4bf6ed5-543f-4d1a-9765-d8a902462306,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d53c20b85a8a392204f3bf534e0254d96ea5c1c01b0b436800608510ee89e2,PodSandboxId:4e975a881fa17a33473509638ce8fe8bf0949042d99eea312404e9d05f34deab,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293232306284653,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9x
c5z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e4667439-a0d5-44ae-a665-8b790e04d2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2714de6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1322675057a2e6412c573ee31f8aa99f216606fbf4c74a1d601f5c95b6c16140,PodSandboxId:54b7dce23ad653f98cd0c048862ea16836bde856c459c1b297c3407cb9c955c0,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1751293232167207230,Labels:map[string]string{io.kubernetes.container.name: local-path-provisio
ner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-gzp6b,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8ce9727f-c71b-4d6c-99c4-efe886cabb17,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394bba22fffdde821de654bc3b0a362a6f24fc6c68deb6d40cb1eca5b765aef,PodSandboxId:7cdcf7a057d5ab2e0adf4f2707500f155ac60fb884462ef0e53a1cf8dab1a94f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0ea86a086203367db3e76dfbaf10b334e274b2af5d2c56fc70e0222e83ba0400,State:CONTAINER_EXITED,CreatedAt:1751293226247136144,Labels:map[s
tring]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fnqjq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a82da282-056e-49ac-84bf-65ba99842cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81132f0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b37034569df08949dbe508dc4c0a264198a3646b6537708b4482130a4eb095,PodSandboxId:ab80df45e204ecca0616649d66d887aadaba18f3a612d511bd4a5dae1087ee8f,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,Create
dAt:1751293224537347353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-2dgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b452b4b-9d13-4540-ab29-ec9dc9211e75,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5b14e1bc43a77be2968268faa09d70246a4d73b4eee573978c6b4a9d7fbfe,PodSandboxId:7f285ffa7ac9cc3cbd1cefb10698eee8c745940148034044f85d2ff8d9941786,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1751293187595639610,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688d2765-af4d-40da-a2a8-a18c0936a24d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d635c9d667c7678651a122f73299976a2b2dbb224c8282b8b61bcbacab4177,PodSandboxId:3d37e16d91d2bdd9d7a24cfd0691432a1a998d502e2dfc2f58e4d7c4e1726a6b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image
:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1751293157842468359,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-g5z6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df18eec1-4314-4045-804d-b82424676c71,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4,PodSandboxId:97a7ca87e0fdb0c43510e28c780e66e8415de643ed3274f35bfadd1ae539f177,Metadata:&ContainerMetadata{Name:storage-provisio
ner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751293157351050427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cf7ffa-1e9d-4045-ba8c-26713b592bee,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2,PodSandboxId:78956e77203cb1a5cb105ff68e8b29fcd0f957a431ebae6b268cbea3b30ca0c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&I
mageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751293150265474431,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-gcxhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89022f36-ce77-49a7-a13b-77ae0fd99bbc,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10061ba8
24c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b,PodSandboxId:b60868a950e81d99a2e8511ad9a390755ca4d17d25d44d54157819ac82267880,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751293149240606351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cm28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4192237-41bc-4541-b487-a9003f16fc0d,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206
f6a1b219bec9dcccef936a,PodSandboxId:3b49e7f986574761757cc283780091ebe65cf579383699825fee3ff1266cad26,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751293138242086482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49c4f62c290c365bec7ff0640a449b10,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3
cdfe0,PodSandboxId:793d3507bd395caf531933e0f14a1162a9b998f9c5e169fe596e4a170da73626,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751293138186585913,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134de6357a0cabc5d6163fa863f0498b,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5
377197a570d627,PodSandboxId:d882c0c670fcea928ef58c5f95272c77b5b48aca3f4c78ca96e6711ef6076140,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751293138109105799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18febec5a694825da083caa9dce34a0,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb,Po
dSandboxId:ecf8d198683c7ced8c4c876fe6ad6ad7ffa62f34c56eae957afda2791163200f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751293138149067777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-301682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beafcd19697a733d4adf3b9d67a4707e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aefcba50-1bb4-495e-bf79-efe51405b63d name=/runtime.v1.RuntimeService/ListContain
ers
	Jun 30 14:27:49 addons-301682 crio[849]: time="2025-06-30 14:27:49.016501733Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=aac92765-5638-4f12-9dc9-3a8bb1c96c18 name=/runtime.v1.RuntimeService/Status
	Jun 30 14:27:49 addons-301682 crio[849]: time="2025-06-30 14:27:49.016622199Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=aac92765-5638-4f12-9dc9-3a8bb1c96c18 name=/runtime.v1.RuntimeService/Status
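
The exchange above is the CRI gRPC API in action: the kubelet (and crictl) poll CRI-O over /var/run/crio/crio.sock with Version, ImageFsInfo, ListContainers, and Status calls, and CRI-O's otel-collector interceptor logs each request/response pair under a correlation id. A minimal sketch of the same ListContainers round-trip, assuming the published k8s.io/cri-api v1 client and the socket path shown in the node's cri-socket annotation below; this program is illustrative, not part of the test suite:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the same socket CRI-O serves in this cluster (see the
	// kubeadm.alpha.kubernetes.io/cri-socket annotation under "describe nodes").
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter takes the "No filters were applied, returning full
	// container list" path logged from server/container_list.go:60 above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Truncate the id to 13 chars, matching the table below.
		fmt.Printf("%.13s  %-40s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}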
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	0cbc731321432       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469                            About a minute ago   Exited              gadget                                   4                   43868af5a7e43       gadget-mrnh4
	ccb1fec83c55c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago        Running             busybox                                  0                   744d3a8558a51       busybox
	f4356fb8a203d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago        Running             csi-snapshotter                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	a17ff3ad029d0       926061c8f6ec365514a52162138d8b0d1bf99777b6d967797d26eabc00d3a267                                                                             2 minutes ago        Exited              cloud-spanner-emulator                   6                   d3c666dc5a318       cloud-spanner-emulator-6d967984f9-l9lpc
	505ec6a97e3e1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	0e8810b68e820       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            4 minutes ago        Running             liveness-probe                           0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	977ef3af77456       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           4 minutes ago        Running             hostpath                                 0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	12db79e5b741e       registry.k8s.io/ingress-nginx/controller@sha256:aadad8e26329d345dea3a69b8deb9f3c52899a97cbaf7e702b8dfbeae3082c15                             4 minutes ago        Running             controller                               0                   e27c33843e336       ingress-nginx-controller-67687b59dd-hqql8
	5dfe9d02b1b1a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                5 minutes ago        Running             node-driver-registrar                    0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	470ef449849e9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago        Running             volume-snapshot-controller               0                   4736a1c095805       snapshot-controller-68b874b76f-m97pd
	90d1724e2a8e9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago        Running             csi-external-health-monitor-controller   0                   51d81b5aefa46       csi-hostpathplugin-h4qg2
	089511c925cdb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago        Running             volume-snapshot-controller               0                   901b27bd18ec3       snapshot-controller-68b874b76f-zvnk2
	c2e8c85ce8151       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago        Running             csi-resizer                              0                   754958dc28d19       csi-hostpath-resizer-0
	ba49554ce7e85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago        Running             csi-attacher                             0                   ef302c090f9a8       csi-hostpath-attacher-0
	78d53c20b85a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   7 minutes ago        Exited              patch                                    0                   4e975a881fa17       ingress-nginx-admission-patch-9xc5z
	1322675057a2e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago        Running             local-path-provisioner                   0                   54b7dce23ad65       local-path-provisioner-76f89f99b5-gzp6b
	8394bba22fffd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:21cf5292cb6a8aa60c83dbfbbb06b91d7139931b979d49c525687d5724c58ddf                   7 minutes ago        Exited              create                                   0                   7cdcf7a057d5a       ingress-nginx-admission-create-fnqjq
	87b37034569df       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              7 minutes ago        Running             registry-proxy                           0                   ab80df45e204e       registry-proxy-2dgr9
	aca5b14e1bc43       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             8 minutes ago        Running             minikube-ingress-dns                     0                   7f285ffa7ac9c       kube-ingress-dns-minikube
	70d635c9d667c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago        Running             amd-gpu-device-plugin                    0                   3d37e16d91d2b       amd-gpu-device-plugin-g5z6w
	f3766ac202b89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago        Running             storage-provisioner                      0                   97a7ca87e0fdb       storage-provisioner
	5aadabb8b1bfc       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                                             8 minutes ago        Running             coredns                                  0                   78956e77203cb       coredns-674b8bbfcf-gcxhf
	f10061ba824c0       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                                                             8 minutes ago        Running             kube-proxy                               0                   b60868a950e81       kube-proxy-cm28f
	ccc99095a0e73       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                                                             8 minutes ago        Running             kube-apiserver                           0                   3b49e7f986574       kube-apiserver-addons-301682
	b4d0fe15b4640       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                                                             8 minutes ago        Running             kube-controller-manager                  0                   793d3507bd395       kube-controller-manager-addons-301682
	a117b554832ef       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                                             8 minutes ago        Running             etcd                                     0                   ecf8d198683c7       etcd-addons-301682
	4e556fe1e25cc       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                                                             8 minutes ago        Running             kube-scheduler                           0                   d882c0c670fce       kube-scheduler-addons-301682
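
In this table ATTEMPT is the CRI restart counter (the io.kubernetes.container.restartCount annotation in the dumps above), so the Exited rows with non-zero attempts, gadget at 4 and cloud-spanner-emulator at 6, are the crash-looping workloads; everything at attempt 0 started once and is still running or ran to completion. A small helper in the same vein as the sketch above (same cri-api types; the threshold is an arbitrary illustration, and the sample data is constructed, not read from the runtime):

package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// flagCrashLoops returns containers that have exited after repeated restart
// attempts, the pattern gadget (attempt 4) and cloud-spanner-emulator
// (attempt 6) show in the table above.
func flagCrashLoops(cs []*runtimeapi.Container, threshold uint32) []*runtimeapi.Container {
	var out []*runtimeapi.Container
	for _, c := range cs {
		if c.State == runtimeapi.ContainerState_CONTAINER_EXITED &&
			c.Metadata.Attempt >= threshold {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	// In practice this would be resp.Containers from a ListContainers call.
	sample := []*runtimeapi.Container{
		{
			Metadata: &runtimeapi.ContainerMetadata{Name: "gadget", Attempt: 4},
			State:    runtimeapi.ContainerState_CONTAINER_EXITED,
		},
		{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
			State:    runtimeapi.ContainerState_CONTAINER_RUNNING,
		},
	}
	for _, c := range flagCrashLoops(sample, 3) {
		fmt.Printf("%s is crash-looping (attempt %d)\n", c.Metadata.Name, c.Metadata.Attempt)
	}
}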
	
	
	==> coredns [5aadabb8b1bfca262936a220645b0a15a878220838907964634c52ea0ba0e8d2] <==
	[INFO] 10.244.0.7:58189 - 6949 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000491441s
	[INFO] 10.244.0.7:60042 - 16328 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000229979s
	[INFO] 10.244.0.7:60042 - 4481 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000132031s
	[INFO] 10.244.0.7:60042 - 36051 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000105956s
	[INFO] 10.244.0.7:60042 - 7821 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000112592s
	[INFO] 10.244.0.7:60042 - 14680 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000118629s
	[INFO] 10.244.0.7:60042 - 922 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000102727s
	[INFO] 10.244.0.7:60042 - 12936 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00010126s
	[INFO] 10.244.0.7:60042 - 1568 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000095126s
	[INFO] 10.244.0.7:39891 - 34990 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000210019s
	[INFO] 10.244.0.7:39891 - 39873 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000252132s
	[INFO] 10.244.0.7:39891 - 65440 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000366855s
	[INFO] 10.244.0.7:39891 - 38219 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000824346s
	[INFO] 10.244.0.7:39891 - 6010 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000193472s
	[INFO] 10.244.0.7:39891 - 8093 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000114313s
	[INFO] 10.244.0.7:39891 - 55450 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001533122s
	[INFO] 10.244.0.7:39891 - 7156 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000105631s
	[INFO] 10.244.0.7:47694 - 41755 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000240353s
	[INFO] 10.244.0.7:47694 - 30504 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000183725s
	[INFO] 10.244.0.7:47694 - 26717 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000163918s
	[INFO] 10.244.0.7:47694 - 14699 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000151125s
	[INFO] 10.244.0.7:47694 - 60293 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000196812s
	[INFO] 10.244.0.7:47694 - 41891 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000142948s
	[INFO] 10.244.0.7:47694 - 43263 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000216118s
	[INFO] 10.244.0.7:47694 - 55867 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000121659s
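	Editor's note: the NXDOMAIN/NOERROR pairs above are the normal effect of the default pod resolver config (ndots:5 plus the cluster search path): each lookup of registry.kube-system.svc.cluster.local is first tried with every search suffix appended before the bare name resolves. A minimal sketch for cutting the extra queries via pod dnsConfig — the pod name and probe command are hypothetical, for illustration only:
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: dns-ndots-demo            # hypothetical name
	spec:
	  dnsConfig:
	    options:
	      - name: ndots
	        value: "1"                # dotted names resolve as FQDNs, skipping the search-path walk
	  containers:
	    - name: probe
	      image: busybox:stable
	      command: ["sh", "-c", "nslookup registry.kube-system.svc.cluster.local.; sleep 3600"]
	EOF
	Appending a trailing dot to the name, as in the probe command, has the same effect for a single lookup: it marks the name fully qualified.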
	
	
	==> describe nodes <==
	Name:               addons-301682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-301682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-301682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_19_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-301682
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-301682"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:19:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-301682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:27:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:18:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:27:15 +0000   Mon, 30 Jun 2025 14:19:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-301682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3f7748b45e54c5d95a766f7ac118097
	  System UUID:                c3f7748b-45e5-4c5d-95a7-66f7ac118097
	  Boot ID:                    4dcad91c-eb4d-46c9-ae52-10be6c00fd59
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  default                     cloud-spanner-emulator-6d967984f9-l9lpc                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  gadget                      gadget-mrnh4                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  ingress-nginx               ingress-nginx-controller-67687b59dd-hqql8                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m33s
	  kube-system                 amd-gpu-device-plugin-g5z6w                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 coredns-674b8bbfcf-gcxhf                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m41s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 csi-hostpathplugin-h4qg2                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 etcd-addons-301682                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m46s
	  kube-system                 kube-apiserver-addons-301682                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-controller-manager-addons-301682                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-cm28f                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-scheduler-addons-301682                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 registry-694bd45846-x8cnn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 registry-creds-6b69cdcdd5-n9cld                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 registry-proxy-2dgr9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 snapshot-controller-68b874b76f-m97pd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 snapshot-controller-68b874b76f-zvnk2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  local-path-storage          helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  local-path-storage          local-path-provisioner-76f89f99b5-gzp6b                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  yakd-dashboard              yakd-dashboard-575dd5996b-cwpg5                               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m39s                  kube-proxy       
	  Normal  Starting                 8m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m52s (x8 over 8m52s)  kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m52s (x8 over 8m52s)  kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m52s (x7 over 8m52s)  kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m46s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m46s                  kubelet          Node addons-301682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m46s                  kubelet          Node addons-301682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s                  kubelet          Node addons-301682 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m45s                  kubelet          Node addons-301682 status is now: NodeReady
	  Normal  RegisteredNode           8m42s                  node-controller  Node addons-301682 event: Registered Node addons-301682 in Controller
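	Editor's note: with 25 non-terminated pods on a 2-CPU/4Gi guest, the Allocated resources block above is the quickest capacity signal in these dumps; it can be isolated from the full describe output with, e.g.:
	kubectl --context addons-301682 describe node addons-301682 | sed -n '/Allocated resources:/,/Events:/p'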
	
	
	==> dmesg <==
	[  +0.107780] kauditd_printk_skb: 74 callbacks suppressed
	[Jun30 14:19] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.870598] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.166813] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.030230] kauditd_printk_skb: 118 callbacks suppressed
	[  +3.981178] kauditd_printk_skb: 99 callbacks suppressed
	[ +14.133007] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.888041] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:20] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.101498] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.564016] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.018820] kauditd_printk_skb: 4 callbacks suppressed
	[Jun30 14:22] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.468740] kauditd_printk_skb: 33 callbacks suppressed
	[Jun30 14:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.720029] kauditd_printk_skb: 37 callbacks suppressed
	[Jun30 14:25] kauditd_printk_skb: 33 callbacks suppressed
	[  +3.578772] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.590938] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.177192] kauditd_printk_skb: 20 callbacks suppressed
	[Jun30 14:26] kauditd_printk_skb: 4 callbacks suppressed
	[ +46.460054] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:27] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [a117b554832ef1ab870ae7ea2e6f6cf78f8ec3b9274a5a824cb1e067df4a8ecb] <==
	{"level":"info","ts":"2025-06-30T14:20:36.858201Z","caller":"traceutil/trace.go:171","msg":"trace[1045741122] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"168.929858ms","start":"2025-06-30T14:20:36.689259Z","end":"2025-06-30T14:20:36.858189Z","steps":["trace[1045741122] 'process raft request'  (duration: 168.657811ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:21:16.253305Z","caller":"traceutil/trace.go:171","msg":"trace[684793743] linearizableReadLoop","detail":"{readStateIndex:1243; appliedIndex:1242; }","duration":"258.158507ms","start":"2025-06-30T14:21:15.995119Z","end":"2025-06-30T14:21:16.253277Z","steps":["trace[684793743] 'read index received'  (duration: 257.90548ms)","trace[684793743] 'applied index is now lower than readState.Index'  (duration: 252.632µs)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T14:21:16.254641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.198227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:21:16.254726Z","caller":"traceutil/trace.go:171","msg":"trace[347540210] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"200.343691ms","start":"2025-06-30T14:21:16.054373Z","end":"2025-06-30T14:21:16.254716Z","steps":["trace[347540210] 'agreement among raft nodes before linearized reading'  (duration: 200.191188ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.254998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.889254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:21:16.255051Z","caller":"traceutil/trace.go:171","msg":"trace[2072353184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"259.964064ms","start":"2025-06-30T14:21:15.995079Z","end":"2025-06-30T14:21:16.255043Z","steps":["trace[2072353184] 'agreement among raft nodes before linearized reading'  (duration: 259.892612ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:21:16.256094Z","caller":"traceutil/trace.go:171","msg":"trace[752785918] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"419.629539ms","start":"2025-06-30T14:21:15.836340Z","end":"2025-06-30T14:21:16.255969Z","steps":["trace[752785918] 'process raft request'  (duration: 416.770167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:21:16.256259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:21:15.836292Z","time spent":"419.882706ms","remote":"127.0.0.1:55816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1189 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-06-30T14:22:57.074171Z","caller":"traceutil/trace.go:171","msg":"trace[97580462] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"235.032412ms","start":"2025-06-30T14:22:56.839110Z","end":"2025-06-30T14:22:57.074143Z","steps":["trace[97580462] 'process raft request'  (duration: 234.613297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.462692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650406Z","caller":"traceutil/trace.go:171","msg":"trace[1036457483] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"155.081366ms","start":"2025-06-30T14:22:59.495275Z","end":"2025-06-30T14:22:59.650356Z","steps":["trace[1036457483] 'range keys from in-memory index tree'  (duration: 154.411147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:22:59.650586Z","caller":"traceutil/trace.go:171","msg":"trace[806257844] transaction","detail":"{read_only:false; response_revision:1386; number_of_response:1; }","duration":"115.895314ms","start":"2025-06-30T14:22:59.534680Z","end":"2025-06-30T14:22:59.650576Z","steps":["trace[806257844] 'process raft request'  (duration: 113.707335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.649782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"485.393683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.650888Z","caller":"traceutil/trace.go:171","msg":"trace[707366630] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1385; }","duration":"486.585604ms","start":"2025-06-30T14:22:59.164295Z","end":"2025-06-30T14:22:59.650881Z","steps":["trace[707366630] 'range keys from in-memory index tree'  (duration: 485.334873ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.650922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.164282Z","time spent":"486.621786ms","remote":"127.0.0.1:55612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.09899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651010Z","caller":"traceutil/trace.go:171","msg":"trace[926388769] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"375.285797ms","start":"2025-06-30T14:22:59.275719Z","end":"2025-06-30T14:22:59.651005Z","steps":["trace[926388769] 'range keys from in-memory index tree'  (duration: 374.055569ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.275706Z","time spent":"375.316283ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.573265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651095Z","caller":"traceutil/trace.go:171","msg":"trace[444156936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1385; }","duration":"374.826279ms","start":"2025-06-30T14:22:59.276264Z","end":"2025-06-30T14:22:59.651090Z","steps":["trace[444156936] 'range keys from in-memory index tree'  (duration: 373.54342ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:22:59.651111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T14:22:59.276255Z","time spent":"374.850773ms","remote":"127.0.0.1:55832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-06-30T14:22:59.649971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.221471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:22:59.651162Z","caller":"traceutil/trace.go:171","msg":"trace[72079455] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1385; }","duration":"136.411789ms","start":"2025-06-30T14:22:59.514744Z","end":"2025-06-30T14:22:59.651156Z","steps":["trace[72079455] 'range keys from in-memory index tree'  (duration: 135.196228ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:25:50.156282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.241875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-06-30T14:25:50.156408Z","caller":"traceutil/trace.go:171","msg":"trace[1656189336] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1889; }","duration":"105.429353ms","start":"2025-06-30T14:25:50.050958Z","end":"2025-06-30T14:25:50.156387Z","steps":["trace[1656189336] 'range keys from in-memory index tree'  (duration: 105.167742ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:27:49 up 9 min,  0 users,  load average: 0.29, 0.87, 0.64
	Linux addons-301682 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ccc99095a0e7387a7ca923fbd4ad4e5eb360e23206f6a1b219bec9dcccef936a] <==
	E0630 14:20:17.018990       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0630 14:20:17.019073       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0630 14:20:17.020266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0630 14:20:17.020272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0630 14:20:30.566598       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.249.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.249.255:443: connect: connection refused" logger="UnhandledError"
	W0630 14:20:30.568692       1 handler_proxy.go:99] no RequestInfo found in the context
	E0630 14:20:30.568788       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0630 14:20:30.592794       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0630 14:20:30.602722       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0630 14:25:32.039384       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43658: use of closed network connection
	E0630 14:25:32.235328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.227:8443->192.168.39.1:43690: use of closed network connection
	I0630 14:25:35.327796       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:40.911437       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:25:41.137079       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.71.181"}
	I0630 14:25:41.142822       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:25:41.721263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.215.125"}
	I0630 14:25:47.346218       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:26:31.606219       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:27:03.338971       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
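	Editor's note: the 503s above come from the aggregated metrics API: the v1beta1.metrics.k8s.io APIService fronts a metrics-server Service that was not serving at the time (the 14:26:31 line shows the requeue eventually clearing). Its availability can be read straight off the APIService status:
	kubectl --context addons-301682 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'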
	
	
	==> kube-controller-manager [b4d0fe15b46400813b1ffa3645e392135495ee9a571e837affd1125b5b3cdfe0] <==
	I0630 14:19:07.734669       1 shared_informer.go:357] "Caches are synced" controller="PV protection"
	I0630 14:19:07.775861       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:19:07.784976       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 14:19:07.862950       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0630 14:19:07.863179       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 14:19:07.863332       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-301682"
	I0630 14:19:07.863451       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 14:19:07.938753       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:19:07.942958       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:19:08.378410       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:19:08.381277       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:19:08.381295       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:19:08.381303       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0630 14:19:37.948324       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0630 14:19:37.949721       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="traces.gadget.kinvolk.io"
	I0630 14:19:37.949773       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0630 14:19:37.949832       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0630 14:19:38.050496       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:19:38.384965       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0630 14:19:38.390227       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0630 14:19:38.491972       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0630 14:20:08.056813       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0630 14:20:08.499327       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0630 14:25:45.545454       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0630 14:27:13.514636       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
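	Editor's note: the "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors are the same unavailable APIService as seen by the quota and garbage-collector controllers, and they stop once metrics-server answers. If the addon were disabled deliberately, deleting the leftover registration would silence them — a generic remediation, not a step this test performs:
	kubectl --context addons-301682 delete apiservice v1beta1.metrics.k8s.io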
	
	
	==> kube-proxy [f10061ba824c0af74178f2765f922aa273089092a26ae09ed5f72f813997681b] <==
	E0630 14:19:09.616075       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:19:09.628197       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0630 14:19:09.628280       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:19:09.728584       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:19:09.728641       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:19:09.728663       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:19:09.760004       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:19:09.760419       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:19:09.760431       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:19:09.761800       1 config.go:199] "Starting service config controller"
	I0630 14:19:09.761820       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:19:09.764743       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:19:09.764796       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:19:09.764830       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:19:09.764834       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:19:09.770113       1 config.go:329] "Starting node config controller"
	I0630 14:19:09.770142       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:19:09.862889       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:19:09.865227       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:19:09.865265       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:19:09.870697       1 shared_informer.go:357] "Caches are synced" controller="node config"
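	Editor's note: the nftables cleanup error on the first line is expected on this guest kernel: kube-proxy probes for IPv6 nftables support, fails, and falls back to the single-stack IPv4 iptables proxier, as the following lines confirm. The active mode can be double-checked from the DaemonSet's logs or its ConfigMap:
	kubectl --context addons-301682 -n kube-system logs ds/kube-proxy | grep -i proxier
	kubectl --context addons-301682 -n kube-system get cm kube-proxy -o yaml | grep "mode:"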
	
	
	==> kube-scheduler [4e556fe1e25cc9c3c68e2987b595ab1ea247af48b4b15dc6b5377197a570d627] <==
	E0630 14:19:00.996185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:19:00.996326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:19:00.996316       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:00.996403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:00.996618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:00.996471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:00.998826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:00.999006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:01.002700       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.002834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:01.865362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:19:01.884714       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:19:01.908759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:19:01.937379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:19:01.938367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:19:01.983087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:19:02.032891       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:19:02.058487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:19:02.131893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:19:02.191157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:19:02.310584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:19:02.326588       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:19:02.381605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 14:19:04.769814       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
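	Editor's note: the burst of "Failed to watch ... is forbidden" errors is startup ordering: the scheduler's informers come up before the apiserver has reconciled the default RBAC bindings, and they retry until the caches sync at 14:19:04. Were they to persist, the binding to inspect (per upstream defaults) would be:
	kubectl --context addons-301682 get clusterrolebinding system:kube-scheduler -o wide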
	
	
	==> kubelet <==
	Jun 30 14:27:19 addons-301682 kubelet[1543]: I0630 14:27:19.705261    1543 scope.go:117] "RemoveContainer" containerID="a17ff3ad029d095af0767c96cf2934d45854344cdf884ba4eed1a0f8bc867aba"
	Jun 30 14:27:19 addons-301682 kubelet[1543]: E0630 14:27:19.705500    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloud-spanner-emulator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cloud-spanner-emulator pod=cloud-spanner-emulator-6d967984f9-l9lpc_default(bcd520ac-b89d-4aa8-80a3-08fcea21e742)\"" pod="default/cloud-spanner-emulator-6d967984f9-l9lpc" podUID="bcd520ac-b89d-4aa8-80a3-08fcea21e742"
	Jun 30 14:27:21 addons-301682 kubelet[1543]: I0630 14:27:21.701476    1543 scope.go:117] "RemoveContainer" containerID="0cbc731321432449822253c4ca17cd5e3207a27d23928a611803a79728b3822f"
	Jun 30 14:27:21 addons-301682 kubelet[1543]: E0630 14:27:21.702054    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-mrnh4_gadget(f033c8a2-1ce7-4009-8b24-756b9f31550e)\"" pod="gadget/gadget-mrnh4" podUID="f033c8a2-1ce7-4009-8b24-756b9f31550e"
	Jun 30 14:27:24 addons-301682 kubelet[1543]: E0630 14:27:24.012101    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293644011885341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:27:24 addons-301682 kubelet[1543]: E0630 14:27:24.012139    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293644011885341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:27:26 addons-301682 kubelet[1543]: E0630 14:27:26.047779    1543 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Jun 30 14:27:26 addons-301682 kubelet[1543]: E0630 14:27:26.048256    1543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042a3494-2e07-4ce8-b9f8-7d37cf08138d-gcr-creds podName:042a3494-2e07-4ce8-b9f8-7d37cf08138d nodeName:}" failed. No retries permitted until 2025-06-30 14:29:28.04822335 +0000 UTC m=+624.504522830 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/042a3494-2e07-4ce8-b9f8-7d37cf08138d-gcr-creds") pod "registry-creds-6b69cdcdd5-n9cld" (UID: "042a3494-2e07-4ce8-b9f8-7d37cf08138d") : secret "registry-creds-gcr" not found
	Jun 30 14:27:27 addons-301682 kubelet[1543]: E0630 14:27:27.766447    1543 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7"
	Jun 30 14:27:27 addons-301682 kubelet[1543]: E0630 14:27:27.766515    1543 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7"
	Jun 30 14:27:27 addons-301682 kubelet[1543]: E0630 14:27:27.766844    1543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry,Image:docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:5000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:REGISTRY_STORAGE_DELETE_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25znc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-694bd45846-x8cnn_kube-system(7abfe955-5483-43f9-ad73-92df930e353e): ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:27:27 addons-301682 kubelet[1543]: E0630 14:27:27.768995    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ErrImagePull: \"reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:27:33 addons-301682 kubelet[1543]: I0630 14:27:33.697240    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-6d967984f9-l9lpc" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:27:33 addons-301682 kubelet[1543]: I0630 14:27:33.697956    1543 scope.go:117] "RemoveContainer" containerID="a17ff3ad029d095af0767c96cf2934d45854344cdf884ba4eed1a0f8bc867aba"
	Jun 30 14:27:33 addons-301682 kubelet[1543]: E0630 14:27:33.698144    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloud-spanner-emulator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cloud-spanner-emulator pod=cloud-spanner-emulator-6d967984f9-l9lpc_default(bcd520ac-b89d-4aa8-80a3-08fcea21e742)\"" pod="default/cloud-spanner-emulator-6d967984f9-l9lpc" podUID="bcd520ac-b89d-4aa8-80a3-08fcea21e742"
	Jun 30 14:27:34 addons-301682 kubelet[1543]: E0630 14:27:34.013940    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293654013583953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:27:34 addons-301682 kubelet[1543]: E0630 14:27:34.013976    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293654013583953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:27:36 addons-301682 kubelet[1543]: I0630 14:27:36.695277    1543 scope.go:117] "RemoveContainer" containerID="0cbc731321432449822253c4ca17cd5e3207a27d23928a611803a79728b3822f"
	Jun 30 14:27:39 addons-301682 kubelet[1543]: I0630 14:27:39.695750    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-x8cnn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:27:39 addons-301682 kubelet[1543]: E0630 14:27:39.697274    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7\\\": ErrImagePull: reading manifest sha256:1fc7de654f2ac1247f0b67e8a459e273b0993be7d2beda1f3f56fbf1001ed3e7 in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-694bd45846-x8cnn" podUID="7abfe955-5483-43f9-ad73-92df930e353e"
	Jun 30 14:27:44 addons-301682 kubelet[1543]: E0630 14:27:44.027124    1543 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293664026840558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:27:44 addons-301682 kubelet[1543]: E0630 14:27:44.027170    1543 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751293664026840558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:459307,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:27:48 addons-301682 kubelet[1543]: I0630 14:27:48.695837    1543 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-6d967984f9-l9lpc" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:27:48 addons-301682 kubelet[1543]: I0630 14:27:48.695902    1543 scope.go:117] "RemoveContainer" containerID="a17ff3ad029d095af0767c96cf2934d45854344cdf884ba4eed1a0f8bc867aba"
	Jun 30 14:27:48 addons-301682 kubelet[1543]: E0630 14:27:48.696073    1543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloud-spanner-emulator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cloud-spanner-emulator pod=cloud-spanner-emulator-6d967984f9-l9lpc_default(bcd520ac-b89d-4aa8-80a3-08fcea21e742)\"" pod="default/cloud-spanner-emulator-6d967984f9-l9lpc" podUID="bcd520ac-b89d-4aa8-80a3-08fcea21e742"
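	Editor's note: the registry pod's actual failure is visible here: Docker Hub's unauthenticated pull rate limit (toomanyrequests). A common workaround, sketched under the assumption of valid Hub credentials (placeholder values below), is to authenticate pulls through an imagePullSecret on the pod's service account:
	kubectl --context addons-301682 -n kube-system create secret docker-registry hub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=YOUR_USER --docker-password=YOUR_TOKEN    # placeholders
	kubectl --context addons-301682 -n kube-system patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"hub-creds"}]}'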
	
	
	==> storage-provisioner [f3766ac202b8945f77b5d6ea4c3966d8cce41960afb6375598b7043ab6aff1e4] <==
	W0630 14:27:24.561454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:26.564500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:26.572831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:28.575706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:28.584130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:30.587091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:30.594461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:32.608121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:32.615810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:34.626432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:34.638276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:36.641972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:36.647805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:38.654782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:38.664730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:40.667165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:40.672260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:42.678400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:42.684922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:44.693620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:44.702781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:46.708860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:46.715076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:48.717810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:27:48.723119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
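	Editor's note: these warnings are client-side deprecation notices from client-go: the provisioner (likely its leader-election loop, given the ~2s cadence) still reads and writes v1 Endpoints, which Kubernetes 1.33 flags in favor of EndpointSlice. They are harmless for this test; the replacement objects can be listed with:
	kubectl --context addons-301682 -n kube-system get endpointslices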
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301682 -n addons-301682
helpers_test.go:261: (dbg) Run:  kubectl --context addons-301682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0 yakd-dashboard-575dd5996b-cwpg5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-301682 describe pod nginx test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0 yakd-dashboard-575dd5996b-cwpg5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-301682 describe pod nginx test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0 yakd-dashboard-575dd5996b-cwpg5: exit status 1 (82.560672ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301682/192.168.39.227
	Start Time:       Mon, 30 Jun 2025 14:25:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9gdz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9gdz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m9s                default-scheduler  Successfully assigned default/nginx to addons-301682
	  Warning  Failed     60s                 kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     60s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    59s                 kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     59s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x2 over 2m9s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l844 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6l844:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fnqjq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9xc5z" not found
	Error from server (NotFound): pods "registry-694bd45846-x8cnn" not found
	Error from server (NotFound): pods "registry-creds-6b69cdcdd5-n9cld" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0" not found
	Error from server (NotFound): pods "yakd-dashboard-575dd5996b-cwpg5" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-301682 describe pod nginx test-local-path ingress-nginx-admission-create-fnqjq ingress-nginx-admission-patch-9xc5z registry-694bd45846-x8cnn registry-creds-6b69cdcdd5-n9cld helper-pod-create-pvc-e932c825-6abd-4a97-8888-bc44ed214cd0 yakd-dashboard-575dd5996b-cwpg5: exit status 1
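Note: the exit status 1 above is post-mortem noise rather than a new failure; the ** stderr ** block shows those pods had already been deleted by the time they were described. A hedged sketch of distinguishing a deleted pod from a stuck one with client-go (cs is a *kubernetes.Clientset built as in the sketch earlier; the pod name is simply the one from this run):

	import apierrors "k8s.io/apimachinery/pkg/api/errors"

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"registry-694bd45846-x8cnn", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		// Matches the "Error from server (NotFound)" lines above.
		fmt.Println("pod already deleted; nothing to describe")
	case err != nil:
		panic(err)
	default:
		fmt.Println("pod phase:", pod.Status.Phase)
	}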
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 addons disable yakd --alsologtostderr -v=1: (2m3.26220798s)
--- FAIL: TestAddons/parallel/Yakd (246.40s)
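Note: the underlying cause visible in the nginx events earlier is Docker Hub's unauthenticated pull rate limit (toomanyrequests), which leaves pods stuck in ImagePullBackOff until the limit resets. A hedged Go equivalent of the kubectl query at helpers_test.go:272, listing non-running pods together with their waiting reason (again assuming a clientset cs as above):

	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				// e.g. "default/nginx: ImagePullBackOff"
				fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, st.State.Waiting.Reason)
			}
		}
	}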

x
+
TestFunctional/parallel/DashboardCmd (302.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-920930 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-920930 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-920930 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-920930 --alsologtostderr -v=1] stderr:
I0630 14:41:51.871883 1572394 out.go:345] Setting OutFile to fd 1 ...
I0630 14:41:51.872049 1572394 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:41:51.872061 1572394 out.go:358] Setting ErrFile to fd 2...
I0630 14:41:51.872068 1572394 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:41:51.872281 1572394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
I0630 14:41:51.872583 1572394 mustload.go:65] Loading cluster: functional-920930
I0630 14:41:51.872978 1572394 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:41:51.873388 1572394 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:41:51.873494 1572394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:41:51.889879 1572394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
I0630 14:41:51.890493 1572394 main.go:141] libmachine: () Calling .GetVersion
I0630 14:41:51.891101 1572394 main.go:141] libmachine: Using API Version  1
I0630 14:41:51.891133 1572394 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:41:51.891599 1572394 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:41:51.891886 1572394 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:41:51.894033 1572394 host.go:66] Checking if "functional-920930" exists ...
I0630 14:41:51.894382 1572394 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:41:51.894450 1572394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:41:51.911689 1572394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
I0630 14:41:51.912242 1572394 main.go:141] libmachine: () Calling .GetVersion
I0630 14:41:51.912805 1572394 main.go:141] libmachine: Using API Version  1
I0630 14:41:51.912830 1572394 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:41:51.913205 1572394 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:41:51.913473 1572394 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:41:51.913675 1572394 api_server.go:166] Checking apiserver status ...
I0630 14:41:51.913752 1572394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0630 14:41:51.913782 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:41:51.917112 1572394 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:51.917540 1572394 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:41:51.917571 1572394 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:51.917794 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:41:51.917993 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:41:51.918245 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:41:51.918584 1572394 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:41:52.016152 1572394 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6077/cgroup
W0630 14:41:52.035574 1572394 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6077/cgroup: Process exited with status 1
stdout:

stderr:
I0630 14:41:52.035651 1572394 ssh_runner.go:195] Run: ls
I0630 14:41:52.040406 1572394 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8441/healthz ...
I0630 14:41:52.045539 1572394 api_server.go:279] https://192.168.39.113:8441/healthz returned 200:
ok
W0630 14:41:52.045601 1572394 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0630 14:41:52.045767 1572394 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:41:52.045782 1572394 addons.go:69] Setting dashboard=true in profile "functional-920930"
I0630 14:41:52.045789 1572394 addons.go:238] Setting addon dashboard=true in "functional-920930"
I0630 14:41:52.045814 1572394 host.go:66] Checking if "functional-920930" exists ...
I0630 14:41:52.046062 1572394 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:41:52.046106 1572394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:41:52.062588 1572394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43595
I0630 14:41:52.063062 1572394 main.go:141] libmachine: () Calling .GetVersion
I0630 14:41:52.063631 1572394 main.go:141] libmachine: Using API Version  1
I0630 14:41:52.063660 1572394 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:41:52.064105 1572394 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:41:52.064696 1572394 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:41:52.064755 1572394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:41:52.083124 1572394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
I0630 14:41:52.083671 1572394 main.go:141] libmachine: () Calling .GetVersion
I0630 14:41:52.084144 1572394 main.go:141] libmachine: Using API Version  1
I0630 14:41:52.084165 1572394 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:41:52.084596 1572394 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:41:52.084820 1572394 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:41:52.087480 1572394 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:41:52.089802 1572394 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0630 14:41:52.091703 1572394 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0630 14:41:52.093621 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0630 14:41:52.093645 1572394 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0630 14:41:52.093675 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:41:52.096776 1572394 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:52.097205 1572394 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:41:52.097237 1572394 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:52.097445 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:41:52.097659 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:41:52.097806 1572394 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:41:52.097952 1572394 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:41:52.201760 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0630 14:41:52.201799 1572394 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0630 14:41:52.225475 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0630 14:41:52.225512 1572394 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0630 14:41:52.250711 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0630 14:41:52.250744 1572394 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0630 14:41:52.273202 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0630 14:41:52.273229 1572394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0630 14:41:52.297218 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0630 14:41:52.297264 1572394 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0630 14:41:52.320976 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0630 14:41:52.321006 1572394 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0630 14:41:52.341616 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0630 14:41:52.341650 1572394 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0630 14:41:52.362992 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0630 14:41:52.363019 1572394 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0630 14:41:52.387085 1572394 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0630 14:41:52.387114 1572394 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0630 14:41:52.411617 1572394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0630 14:41:53.344274 1572394 main.go:141] libmachine: Making call to close driver server
I0630 14:41:53.344303 1572394 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:41:53.344720 1572394 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:41:53.344743 1572394 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:41:53.344753 1572394 main.go:141] libmachine: Making call to close driver server
I0630 14:41:53.344864 1572394 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:41:53.345172 1572394 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:41:53.345191 1572394 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:41:53.347393 1572394 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-920930 addons enable metrics-server

I0630 14:41:53.349132 1572394 addons.go:201] Writing out "functional-920930" config to set dashboard=true...
W0630 14:41:53.349506 1572394 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0630 14:41:53.350549 1572394 kapi.go:59] client config for functional-920930: &rest.Config{Host:"https://192.168.39.113:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0630 14:41:53.351193 1572394 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0630 14:41:53.351229 1572394 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0630 14:41:53.351238 1572394 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0630 14:41:53.351244 1572394 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0630 14:41:53.351249 1572394 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0630 14:41:53.363941 1572394 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  2153a6b0-14c8-4651-9858-1c0bd2029b77 858 0 2025-06-30 14:41:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-06-30 14:41:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.108.241.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.108.241.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0630 14:41:53.364128 1572394 out.go:270] * Launching proxy ...
* Launching proxy ...
I0630 14:41:53.364209 1572394 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-920930 proxy --port 36195]
I0630 14:41:53.364493 1572394 dashboard.go:157] Waiting for kubectl to output host:port ...
I0630 14:41:53.410360 1572394 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0630 14:41:53.410417 1572394 out.go:270] * Verifying proxy health ...
* Verifying proxy health ...
I0630 14:41:53.425752 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73b0b718-59d3-4c8e-84d9-461f3226e607] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068d200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083fa40 TLS:<nil>}
I0630 14:41:53.425852 1572394 retry.go:31] will retry after 76.13µs: Temporary Error: unexpected response code: 503
I0630 14:41:53.436956 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b902b840-b37e-4514-bf3b-126a25a7796e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc0008a7c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057e140 TLS:<nil>}
I0630 14:41:53.437025 1572394 retry.go:31] will retry after 204.898µs: Temporary Error: unexpected response code: 503
I0630 14:41:53.444803 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba5bba7e-2b38-48e6-844c-c1395fa67605] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068d3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d68c0 TLS:<nil>}
I0630 14:41:53.444902 1572394 retry.go:31] will retry after 162.71µs: Temporary Error: unexpected response code: 503
I0630 14:41:53.452659 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04cb322f-6f2f-4513-87de-ea6b52086ba5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc000d9ee80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057e280 TLS:<nil>}
I0630 14:41:53.452764 1572394 retry.go:31] will retry after 235.678µs: Temporary Error: unexpected response code: 503
I0630 14:41:53.470579 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df094c33-86e2-4f66-9a6c-33a01bf7d938] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068d600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083fb80 TLS:<nil>}
I0630 14:41:53.470681 1572394 retry.go:31] will retry after 697.379µs: Temporary Error: unexpected response code: 503
I0630 14:41:53.489469 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8ae1625b-a8e7-4ff1-901b-5f2f9c5beede] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc000d9ef80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057e3c0 TLS:<nil>}
I0630 14:41:53.489579 1572394 retry.go:31] will retry after 1.065059ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.510248 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ba8c386-e01b-492e-a2f2-36ce4ac53729] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068d740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067b040 TLS:<nil>}
I0630 14:41:53.510322 1572394 retry.go:31] will retry after 790.507µs: Temporary Error: unexpected response code: 503
I0630 14:41:53.520937 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[31c7443d-fad2-4966-882d-f6f45887f47e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc000d9f080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057e500 TLS:<nil>}
I0630 14:41:53.521065 1572394 retry.go:31] will retry after 2.280478ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.528500 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2087540e-2cb6-4878-a794-4ab6b2df2723] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068d840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067b400 TLS:<nil>}
I0630 14:41:53.528574 1572394 retry.go:31] will retry after 2.756356ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.536656 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e661c160-656b-45ef-bb92-af502f6439ed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc0009620c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057e640 TLS:<nil>}
I0630 14:41:53.536738 1572394 retry.go:31] will retry after 2.489633ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.543461 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c652f154-2066-49cd-83d2-6518db4215d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068d940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d6a00 TLS:<nil>}
I0630 14:41:53.543562 1572394 retry.go:31] will retry after 6.293147ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.554583 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c27455a0-4aa7-4328-a328-293af6e55473] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc000d9f180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057e780 TLS:<nil>}
I0630 14:41:53.554679 1572394 retry.go:31] will retry after 9.962636ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.568821 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8cb159a2-e8e2-4399-9e1c-0e2ab79ff7aa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc0009621c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067b540 TLS:<nil>}
I0630 14:41:53.568908 1572394 retry.go:31] will retry after 14.218885ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.590770 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[573f9714-7580-41cf-94e0-7fe167e8d3c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc000d9f240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d6b40 TLS:<nil>}
I0630 14:41:53.590938 1572394 retry.go:31] will retry after 23.075464ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.618478 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bda7bed3-0db8-4d00-b372-fc5e5cf2c148] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068da80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067b900 TLS:<nil>}
I0630 14:41:53.618547 1572394 retry.go:31] will retry after 37.960849ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.661077 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[47e36276-2426-4732-80b4-98418f751256] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc0009622c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057eb40 TLS:<nil>}
I0630 14:41:53.661163 1572394 retry.go:31] will retry after 42.424453ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.709583 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[558cd84c-d1ce-4b5f-9a89-120cfe26ac22] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068db80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d6c80 TLS:<nil>}
I0630 14:41:53.709677 1572394 retry.go:31] will retry after 48.043564ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.761032 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b26acb9b-b1a7-483c-8ae4-2d1b164b9bfb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc000d9f340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057edc0 TLS:<nil>}
I0630 14:41:53.761140 1572394 retry.go:31] will retry after 60.027523ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.824303 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b9085039-3cef-46ed-bba8-7f93ffc9cccf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc000d9f440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067ba40 TLS:<nil>}
I0630 14:41:53.824390 1572394 retry.go:31] will retry after 111.676154ms: Temporary Error: unexpected response code: 503
I0630 14:41:53.940036 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e64ac8e8-4405-4706-98eb-ccc804780d28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:53 GMT]] Body:0xc00068dc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067bb80 TLS:<nil>}
I0630 14:41:53.940118 1572394 retry.go:31] will retry after 157.525582ms: Temporary Error: unexpected response code: 503
I0630 14:41:54.102069 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[421ce520-7a30-4be5-aa17-773b60e8d78d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:54 GMT]] Body:0xc000d9f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057f680 TLS:<nil>}
I0630 14:41:54.102159 1572394 retry.go:31] will retry after 265.931678ms: Temporary Error: unexpected response code: 503
I0630 14:41:54.372187 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b35986ff-18c3-46b6-b544-89ef5afb727e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:54 GMT]] Body:0xc000962400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067bcc0 TLS:<nil>}
I0630 14:41:54.372260 1572394 retry.go:31] will retry after 542.424639ms: Temporary Error: unexpected response code: 503
I0630 14:41:54.918401 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82f86ce2-561e-49da-bf9c-feb04664677b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:54 GMT]] Body:0xc00068dd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d6dc0 TLS:<nil>}
I0630 14:41:54.918482 1572394 retry.go:31] will retry after 899.580226ms: Temporary Error: unexpected response code: 503
I0630 14:41:55.821494 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c1a38587-b60c-4126-a570-82ac941d684a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:55 GMT]] Body:0xc000d9f600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057f7c0 TLS:<nil>}
I0630 14:41:55.821556 1572394 retry.go:31] will retry after 1.444391152s: Temporary Error: unexpected response code: 503
I0630 14:41:57.269659 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0424d42e-1541-484f-b411-f2c3283e4d54] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:57 GMT]] Body:0xc000d9f680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057f900 TLS:<nil>}
I0630 14:41:57.269750 1572394 retry.go:31] will retry after 2.340966663s: Temporary Error: unexpected response code: 503
I0630 14:41:59.614495 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[12b3beea-9c86-44dc-bd9a-e874f5a07907] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:41:59 GMT]] Body:0xc00068df40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00067be00 TLS:<nil>}
I0630 14:41:59.614571 1572394 retry.go:31] will retry after 2.357540965s: Temporary Error: unexpected response code: 503
I0630 14:42:01.976124 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9fe3996a-479c-4010-a346-1a98b9c7c75a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:42:01 GMT]] Body:0xc000962500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057fa40 TLS:<nil>}
I0630 14:42:01.976198 1572394 retry.go:31] will retry after 3.701042169s: Temporary Error: unexpected response code: 503
I0630 14:42:05.682832 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b29f6918-c82b-4124-8515-95efe61a6965] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:42:05 GMT]] Body:0xc000d9f740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057fb80 TLS:<nil>}
I0630 14:42:05.682911 1572394 retry.go:31] will retry after 5.172974972s: Temporary Error: unexpected response code: 503
I0630 14:42:10.862412 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[045653c5-610f-44c3-b937-118af5184812] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:42:10 GMT]] Body:0xc000d9f7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d6f00 TLS:<nil>}
I0630 14:42:10.862487 1572394 retry.go:31] will retry after 5.75559902s: Temporary Error: unexpected response code: 503
I0630 14:42:16.624629 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[066bd46d-c99f-44eb-b919-63e727dfeaf9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:42:16 GMT]] Body:0xc0008d8940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000998500 TLS:<nil>}
I0630 14:42:16.624704 1572394 retry.go:31] will retry after 8.829191432s: Temporary Error: unexpected response code: 503
I0630 14:42:25.458652 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[58b3ba24-687b-4539-8c59-6b844c58b1dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:42:25 GMT]] Body:0xc000962640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057fcc0 TLS:<nil>}
I0630 14:42:25.458750 1572394 retry.go:31] will retry after 26.882267378s: Temporary Error: unexpected response code: 503
I0630 14:42:52.344802 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e909047a-8c8c-4b6b-8a77-5ab875b52fb1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:42:52 GMT]] Body:0xc0008d8a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d7040 TLS:<nil>}
I0630 14:42:52.344903 1572394 retry.go:31] will retry after 30.196691256s: Temporary Error: unexpected response code: 503
I0630 14:43:22.545473 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[965b57df-bcd5-473b-9481-6c77b26422a6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:43:22 GMT]] Body:0xc0008d8bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00057fe00 TLS:<nil>}
I0630 14:43:22.545582 1572394 retry.go:31] will retry after 44.653916097s: Temporary Error: unexpected response code: 503
I0630 14:44:07.203051 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7e1cb019-ad21-4ade-b486-00623c985a10] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:44:07 GMT]] Body:0xc0008d85c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043a000 TLS:<nil>}
I0630 14:44:07.203144 1572394 retry.go:31] will retry after 1m15.557632858s: Temporary Error: unexpected response code: 503
I0630 14:45:22.764745 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[33b6686a-2681-479b-97ab-1292fbe8b3cd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:45:22 GMT]] Body:0xc00063e740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000998640 TLS:<nil>}
I0630 14:45:22.764836 1572394 retry.go:31] will retry after 1m5.64496708s: Temporary Error: unexpected response code: 503
I0630 14:46:28.414340 1572394 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce4c242a-ef34-4cfc-baea-846e65b373a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:46:28 GMT]] Body:0xc000d9e080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043a140 TLS:<nil>}
I0630 14:46:28.414451 1572394 retry.go:31] will retry after 1m27.431753668s: Temporary Error: unexpected response code: 503
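Note: the retry.go lines above trace the proxy-health verifier's backoff: the delay starts in the microsecond range and roughly doubles (with jitter) while the endpoint keeps returning 503, until the test's overall time budget runs out. A self-contained sketch of that shape follows; the cap and budget are assumptions, not minikube's actual constants.

	package main

	import (
		"errors"
		"net/http"
		"time"
	)

	// waitForProxy polls url with capped exponential backoff until it
	// returns 200 OK or the budget is spent, mirroring the 503 retries above.
	func waitForProxy(url string, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		delay := 100 * time.Microsecond
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(delay)
			if delay *= 2; delay > time.Minute {
				delay = time.Minute // assumed cap; the real intervals above keep growing with jitter
			}
		}
		return errors.New("proxy never became healthy: " + url)
	}

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		if err := waitForProxy(url, 5*time.Minute); err != nil {
			panic(err)
		}
	}

In this run the dashboard pod itself never became ready, so no amount of retrying at the proxy layer could succeed; the test gave up when its budget expired.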
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-920930 -n functional-920930
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 logs -n 25: (1.486637545s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | --dry-run --alsologtostderr                                          |                   |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                                   |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                             |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                   | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -p functional-920930                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh mount |                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | grep 9p; ls -la /mount-9p; cat                                       |                   |         |         |                     |                     |
	|                | /mount-9p/pod-dates                                                  |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh sudo                                           | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | umount -f /mount-9p                                                  |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount2                                                           |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount3                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | --kill=true                                                          |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format short                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format yaml                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh pgrep                                          | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | buildkitd                                                            |                   |         |         |                     |                     |
	| image          | functional-920930 image build -t                                     | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | localhost/my-image:functional-920930                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                   |         |         |                     |                     |
	| image          | functional-920930 image ls                                           | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format json                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format table                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
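
The tail of the command audit table above traces the mount-verification flow: the same host directory is 9p-mounted at /mount1, /mount2 and /mount3, each mount is checked with findmnt over ssh, and mount --kill=true tears everything down. As a rough way to replay that check by hand against this profile, here is a minimal Go sketch (the binary path and profile name are taken from the table; checkMount is a hypothetical helper, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

// checkMount runs the same "ssh -- findmnt -T <dir>" probe the table
// shows for each mount point and reports whether the 9p mount is
// visible inside the guest.
func checkMount(dir string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-920930",
		"ssh", "--", "findmnt", "-T", dir)
	out, err := cmd.CombinedOutput()
	fmt.Printf("findmnt -T %s:\n%s", dir, out)
	return err
}

func main() {
	for _, dir := range []string{"/mount1", "/mount2", "/mount3"} {
		if err := checkMount(dir); err != nil {
			fmt.Printf("mount %s not visible: %v\n", dir, err)
		}
	}
}

Against a live profile this prints the findmnt table for each mount point, or the ssh error if the mount never appeared.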
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:41:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:41:51.713534 1572366 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:41:51.714254 1572366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.714303 1572366 out.go:358] Setting ErrFile to fd 2...
	I0630 14:41:51.714320 1572366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.714799 1572366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:41:51.715832 1572366 out.go:352] Setting JSON to false
	I0630 14:41:51.716949 1572366 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":30204,"bootTime":1751264308,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:41:51.717065 1572366 start.go:140] virtualization: kvm guest
	I0630 14:41:51.719204 1572366 out.go:177] * [functional-920930] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:41:51.720785 1572366 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:41:51.720788 1572366 notify.go:220] Checking for updates...
	I0630 14:41:51.723515 1572366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:41:51.725273 1572366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:41:51.727001 1572366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:41:51.728360 1572366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:41:51.729562 1572366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:41:51.731792 1572366 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:41:51.732342 1572366 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.732423 1572366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.749833 1572366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0630 14:41:51.750368 1572366 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.751029 1572366 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.751100 1572366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.751848 1572366 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.752116 1572366 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.752417 1572366 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:41:51.752842 1572366 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.752891 1572366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.770672 1572366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:41:51.771128 1572366 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.771736 1572366 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.771758 1572366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.772126 1572366 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.772410 1572366 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.813022 1572366 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 14:41:51.814347 1572366 start.go:304] selected driver: kvm2
	I0630 14:41:51.814364 1572366 start.go:908] validating driver "kvm2" against &{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:51.814524 1572366 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:41:51.815661 1572366 cni.go:84] Creating CNI manager for ""
	I0630 14:41:51.815724 1572366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:41:51.815791 1572366 start.go:347] cluster config:
	{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:51.817459 1572366 out.go:177] * dry-run validation complete!
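
One detail worth noting in the start log above: the hostinfo entry at start.go:130 is plain JSON, so it can be decoded directly when scraping these reports. A minimal sketch, mapping only the fields visible in that log line (the hostInfo struct here is hypothetical, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

// hostInfo mirrors the fields shown in the hostinfo log line above.
type hostInfo struct {
	Hostname             string `json:"hostname"`
	Uptime               uint64 `json:"uptime"`
	Procs                uint64 `json:"procs"`
	OS                   string `json:"os"`
	Platform             string `json:"platform"`
	PlatformVersion      string `json:"platformVersion"`
	KernelVersion        string `json:"kernelVersion"`
	KernelArch           string `json:"kernelArch"`
	VirtualizationSystem string `json:"virtualizationSystem"`
	VirtualizationRole   string `json:"virtualizationRole"`
}

func main() {
	// Payload copied verbatim from the start.go:130 line above.
	raw := `{"hostname":"ubuntu-20-agent-6","uptime":30204,"bootTime":1751264308,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}`
	var h hostInfo
	if err := json.Unmarshal([]byte(raw), &h); err != nil {
		panic(err)
	}
	// The "virtualization: kvm guest" line in the log is derived from
	// these two fields.
	fmt.Printf("%s: %s %s (%s/%s), virtualization: %s %s\n",
		h.Hostname, h.Platform, h.PlatformVersion, h.OS, h.KernelArch,
		h.VirtualizationSystem, h.VirtualizationRole)
}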
	
	
	==> CRI-O <==
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.715264742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294812715240120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cc255f7-382e-4863-84e1-cd003a3e6258 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.716157879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0457b3cb-9c5c-4437-b62b-c4f8f5d1615a name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.716224142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0457b3cb-9c5c-4437-b62b-c4f8f5d1615a name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.716532649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0457b3cb-9c5c-4437-b62b-c4f8f5d1615a name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.762585686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd4cde89-2789-4603-ae78-1fd4c61ec2a0 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.763430323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd4cde89-2789-4603-ae78-1fd4c61ec2a0 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.765132255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6115dddb-cdc5-4c8b-b588-4e43d9bfe60c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.766322197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294812766297679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6115dddb-cdc5-4c8b-b588-4e43d9bfe60c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.767158650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9924abe5-cff9-429b-a15e-5cffbaa8fc80 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.767211912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9924abe5-cff9-429b-a15e-5cffbaa8fc80 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.767502087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9924abe5-cff9-429b-a15e-5cffbaa8fc80 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.806037685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24ce0893-e0be-4b09-a0dd-3e4bfc027a2b name=/runtime.v1.RuntimeService/Version
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.806209346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24ce0893-e0be-4b09-a0dd-3e4bfc027a2b name=/runtime.v1.RuntimeService/Version
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.807419310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfe2c4a6-190b-49a1-ba7a-f899dd1206b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.808197709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294812808174559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfe2c4a6-190b-49a1-ba7a-f899dd1206b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.808667002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f098af32-a997-4abe-933a-76e4777bb403 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.808731999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f098af32-a997-4abe-933a-76e4777bb403 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.809071918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f098af32-a997-4abe-933a-76e4777bb403 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.850984452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9493728-d460-416c-a284-13ea1d222fac name=/runtime.v1.RuntimeService/Version
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.851071233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9493728-d460-416c-a284-13ea1d222fac name=/runtime.v1.RuntimeService/Version
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.852216267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0884c69-ba88-4707-b7d2-dbb7d9f1fb7b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.852904325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294812852878739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0884c69-ba88-4707-b7d2-dbb7d9f1fb7b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.853468570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa830459-77b6-40c3-8569-d3f6c0489238 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.853542397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa830459-77b6-40c3-8569-d3f6c0489238 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:46:52 functional-920930 crio[5068]: time="2025-06-30 14:46:52.853814546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa830459-77b6-40c3-8569-d3f6c0489238 name=/runtime.v1.RuntimeService/ListContainers
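	
	The ListContainers, Version, and ImageFsInfo entries above are CRI gRPC calls that crio logs each time the kubelet polls it. For illustration only, a minimal Go sketch that issues the same ListContainers RPC against the node's advertised socket (unix:///var/run/crio/crio.sock, per the cri-socket annotation further down); it assumes the k8s.io/cri-api module and root access to the socket, and an empty filter returns the full container list, matching the "No filters were applied" lines above:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI socket crio serves on (path taken from the node's
	// kubeadm.alpha.kubernetes.io/cri-socket annotation in this report).
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC as the /runtime.v1.RuntimeService/ListContainers entries
	// above: an empty ListContainersRequest means no filter.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncate the id to 13 chars, like the container status table below.
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}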
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e9b9ea50e8845       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    5 minutes ago       Running             echoserver                0                   8940a57dc83b8       hello-node-fcfd88b6f-ggs67
	ae1b257871468       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   9a797c9b399f0       busybox-mount
	686042f565e61       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    5 minutes ago       Running             echoserver                0                   7490f84c20f02       hello-node-connect-58f9cf68d8-2fgsq
	90862d77cc8b6       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      6 minutes ago       Running             coredns                   2                   7c722315f961d       coredns-674b8bbfcf-dwpq5
	04e2e562d14a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       2                   6d61208c1e53b       storage-provisioner
	f7dab1382edaf       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                      6 minutes ago       Running             kube-proxy                2                   3658db7a8962b       kube-proxy-6gkck
	74b099ec305ed       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                      6 minutes ago       Running             kube-apiserver            0                   309b89afbd585       kube-apiserver-functional-920930
	1cee72c70835f       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                      6 minutes ago       Running             kube-controller-manager   2                   c19148f62d2ab       kube-controller-manager-functional-920930
	defc9e44613eb       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                      6 minutes ago       Running             kube-scheduler            3                   beb4e35bc8e74       kube-scheduler-functional-920930
	ad4580dc5ad92       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      6 minutes ago       Running             etcd                      2                   bffb72838797f       etcd-functional-920930
	eb52567ddb962       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                      6 minutes ago       Exited              kube-scheduler            2                   beb4e35bc8e74       kube-scheduler-functional-920930
	2de81fa95be3a       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      6 minutes ago       Exited              coredns                   1                   f41b1eeade261       coredns-674b8bbfcf-dwpq5
	628c9b71ec42e       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                      6 minutes ago       Exited              kube-proxy                1                   a32019f55f5cf       kube-proxy-6gkck
	e904ed633b9af       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                      7 minutes ago       Exited              kube-controller-manager   1                   4c60d47a534d5       kube-controller-manager-functional-920930
	fecd739e675eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       1                   b150df55bd94f       storage-provisioner
	cd18d42edb6d1       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      7 minutes ago       Exited              etcd                      1                   7942bf3b782e4       etcd-functional-920930
	
	
	==> coredns [2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:43161 - 51463 "HINFO IN 5671052926409919755.872030747434307965. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.049518594s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:57131 - 26142 "HINFO IN 5030995905101555605.8240098368165366984. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034335283s
	
	
	==> describe nodes <==
	Name:               functional-920930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-920930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=functional-920930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_39_30_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:39:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-920930
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:46:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    functional-920930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011052Ki
	  pods:               110
	System Info:
	  Machine ID:                 091fc3b53bfe4228b4114aaf6e11ec06
	  System UUID:                091fc3b5-3bfe-4228-b411-4aaf6e11ec06
	  Boot ID:                    3d819405-1145-4abc-bc97-f0a5952e9ba8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-2fgsq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  default                     hello-node-fcfd88b6f-ggs67                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     mysql-58ccfd96bb-2hbbf                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m49s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 coredns-674b8bbfcf-dwpq5                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m19s
	  kube-system                 etcd-functional-920930                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m24s
	  kube-system                 kube-apiserver-functional-920930              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-functional-920930     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-proxy-6gkck                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-functional-920930              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-rf2w7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-kk2nt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m17s                  kube-proxy       
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  Starting                 6m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m31s (x8 over 7m31s)  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s (x8 over 7m31s)  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s (x7 over 7m31s)  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m24s                  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s                  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s                  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m23s                  kubelet          Node functional-920930 status is now: NodeReady
	  Normal  RegisteredNode           7m20s                  node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
	  Normal  Starting                 7m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m (x8 over 7m)        kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m (x8 over 7m)        kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m (x7 over 7m)        kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m54s                  node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
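	
	The node detail above is kubectl describe output; the same data is readable programmatically. A small client-go sketch, assuming a kubeconfig whose current context can reach the functional-920930 cluster, that fetches the Node object and prints the Conditions table shown above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); the current context is
	// assumed to point at the functional-920930 cluster from this report.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"functional-920930", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The Conditions block in the describe output above is rendered from
	// node.Status.Conditions (MemoryPressure, DiskPressure, PIDPressure, Ready).
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}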
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.000040] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000295] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Jun30 14:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103461] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.165029] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.732010] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.128604] kauditd_printk_skb: 81 callbacks suppressed
	[  +4.661998] kauditd_printk_skb: 173 callbacks suppressed
	[Jun30 14:40] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.124008] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.123477] kauditd_printk_skb: 138 callbacks suppressed
	[  +0.066516] kauditd_printk_skb: 63 callbacks suppressed
	[  +7.874879] kauditd_printk_skb: 17 callbacks suppressed
	[Jun30 14:41] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.217146] kauditd_printk_skb: 34 callbacks suppressed
	[ +24.853412] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.480845] 9pnet: p9_fd_create_tcp (8344): problem connecting socket to 192.168.39.1
	[  +5.945689] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:42] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028] <==
	{"level":"info","ts":"2025-06-30T14:40:37.157909Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3af003d6f0036250","local-member-id":"8069059f79d446ff","added-peer-id":"8069059f79d446ff","added-peer-peer-urls":["https://192.168.39.113:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-06-30T14:40:37.158117Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"3af003d6f0036250","local-member-id":"8069059f79d446ff","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T14:40:37.158182Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T14:40:37.165503Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-06-30T14:40:37.165827Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"8069059f79d446ff","initial-advertise-peer-urls":["https://192.168.39.113:2380"],"listen-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.113:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-06-30T14:40:37.165872Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-30T14:40:37.165983Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:37.166012Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:39.013343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff is starting a new election at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became pre-candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgPreVoteResp from 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became candidate at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgVoteResp from 8069059f79d446ff at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became leader at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8069059f79d446ff elected leader 8069059f79d446ff at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.016364Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"8069059f79d446ff","local-member-attributes":"{Name:functional-920930 ClientURLs:[https://192.168.39.113:2379]}","request-path":"/0/members/8069059f79d446ff/attributes","cluster-id":"3af003d6f0036250","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:40:39.016549Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:40:39.016883Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:40:39.017420Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:40:39.017558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:39.017607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:39.018102Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:40:39.018553Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:40:39.019210Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.113:2379"}
	{"level":"info","ts":"2025-06-30T14:41:44.641422Z","caller":"traceutil/trace.go:171","msg":"trace[2037699269] transaction","detail":"{read_only:false; response_revision:793; number_of_response:1; }","duration":"228.150284ms","start":"2025-06-30T14:41:44.413242Z","end":"2025-06-30T14:41:44.641393Z","steps":["trace[2037699269] 'process raft request'  (duration: 228.031768ms)"],"step_count":1}
	
	
	==> etcd [cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188] <==
	{"level":"info","ts":"2025-06-30T14:39:55.114512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became pre-candidate at term 2"}
	{"level":"info","ts":"2025-06-30T14:39:55.114569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgPreVoteResp from 8069059f79d446ff at term 2"}
	{"level":"info","ts":"2025-06-30T14:39:55.114596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgVoteResp from 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became leader at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8069059f79d446ff elected leader 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.120330Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:39:55.121082Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:39:55.121635Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.113:2379"}
	{"level":"info","ts":"2025-06-30T14:39:55.121877Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:39:55.122364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:39:55.122875Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:39:55.120294Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"8069059f79d446ff","local-member-attributes":"{Name:functional-920930 ClientURLs:[https://192.168.39.113:2379]}","request-path":"/0/members/8069059f79d446ff/attributes","cluster-id":"3af003d6f0036250","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:39:55.131019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:39:55.131066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:23.306997Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-06-30T14:40:23.308035Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-920930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	{"level":"warn","ts":"2025-06-30T14:40:23.382168Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382224Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382275Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382282Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"info","ts":"2025-06-30T14:40:23.382330Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8069059f79d446ff","current-leader-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2025-06-30T14:40:23.389987Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:23.390088Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:23.390098Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-920930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	
	
	==> kernel <==
	 14:46:53 up 8 min,  0 users,  load average: 1.16, 0.55, 0.29
	Linux functional-920930 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e] <==
	I0630 14:40:40.286148       1 cache.go:39] Caches are synced for autoregister controller
	I0630 14:40:40.319968       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0630 14:40:40.426660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0630 14:40:41.154736       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 14:40:41.971098       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 14:40:42.010584       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0630 14:40:42.039456       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 14:40:42.047332       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 14:40:43.639026       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:40:43.874665       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 14:40:43.924528       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0630 14:40:43.976253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0630 14:40:59.043810       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:40:59.044205       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.17.70"}
	I0630 14:41:02.488213       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:04.616858       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.109.26"}
	I0630 14:41:04.633281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:06.176890       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:06.184813       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.205.82"}
	I0630 14:41:12.664714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:12.665154       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.25.211"}
	I0630 14:41:52.950997       1 controller.go:667] quota admission added evaluator for: namespaces
	I0630 14:41:53.284992       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.241.244"}
	I0630 14:41:53.292370       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:53.334412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.40.171"}
	
	
	==> kube-controller-manager [1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859] <==
	I0630 14:40:43.553714       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0630 14:40:43.577042       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 14:40:43.578385       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0630 14:40:43.581096       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0630 14:40:43.583014       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:40:43.621033       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0630 14:40:43.797594       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0630 14:40:43.820318       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:43.820505       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0630 14:40:43.821742       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 14:40:43.824128       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 14:40:43.833393       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:43.872030       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0630 14:40:44.269400       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:44.293554       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:44.293586       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:40:44.293594       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0630 14:41:53.060063       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.076088       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.076352       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.091858       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.102085       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.102200       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.109054       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.113648       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82] <==
	I0630 14:39:59.918115       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0630 14:39:59.918178       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0630 14:39:59.918629       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0630 14:39:59.920103       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 14:39:59.920307       1 shared_informer.go:357] "Caches are synced" controller="TTL"
	I0630 14:39:59.925191       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0630 14:39:59.925322       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0630 14:39:59.928206       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 14:39:59.943007       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0630 14:39:59.943069       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0630 14:39:59.943157       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 14:39:59.943279       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-920930"
	I0630 14:39:59.943339       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 14:39:59.967874       1 shared_informer.go:357] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0630 14:40:00.019005       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:40:00.028389       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0630 14:40:00.067093       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0630 14:40:00.117961       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0630 14:40:00.129551       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:00.137073       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:00.267074       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0630 14:40:00.638352       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:00.638382       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:40:00.638389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0630 14:40:00.648124       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc] <==
	E0630 14:39:58.003479       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:39:58.013306       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0630 14:39:58.013594       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:39:58.047614       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:39:58.047642       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:39:58.047662       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:39:58.057350       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:39:58.057659       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:39:58.057710       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:39:58.062763       1 config.go:199] "Starting service config controller"
	I0630 14:39:58.062781       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:39:58.062803       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:39:58.062806       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:39:58.062816       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:39:58.062819       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:39:58.063534       1 config.go:329] "Starting node config controller"
	I0630 14:39:58.064061       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:39:58.163472       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:39:58.163563       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:39:58.163537       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:39:58.164141       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9] <==
	E0630 14:40:40.828155       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:40:40.840141       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0630 14:40:40.840317       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:40:40.875016       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:40:40.875078       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:40:40.875101       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:40:40.883154       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:40:40.883493       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:40:40.883534       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:40:40.887292       1 config.go:199] "Starting service config controller"
	I0630 14:40:40.887326       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:40:40.887340       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:40:40.887354       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:40:40.887367       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:40:40.887370       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:40:40.887385       1 config.go:329] "Starting node config controller"
	I0630 14:40:40.887402       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:40:40.987555       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 14:40:40.987660       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:40:40.987702       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:40:40.988035       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867] <==
	I0630 14:40:38.041658       1 serving.go:386] Generated self-signed cert in-memory
	W0630 14:40:40.219828       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0630 14:40:40.219905       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0630 14:40:40.219916       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 14:40:40.219922       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 14:40:40.265788       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 14:40:40.269342       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:40:40.272913       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 14:40:40.274764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:40.274828       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:40.274857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0630 14:40:40.375725       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d] <==
	I0630 14:40:33.215007       1 serving.go:386] Generated self-signed cert in-memory
	W0630 14:40:34.261377       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.113:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.113:8441: connect: connection refused
	W0630 14:40:34.261420       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 14:40:34.261429       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 14:40:34.268807       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 14:40:34.268856       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0630 14:40:34.268874       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0630 14:40:34.270662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:34.270759       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0630 14:40:34.270806       1 shared_informer.go:353] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:34.271094       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 14:40:34.271240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0630 14:40:34.271294       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0630 14:40:34.271469       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	E0630 14:40:34.271630       1 server.go:271] "handlers are not fully synchronized" err="context canceled"
	E0630 14:40:34.271706       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 30 14:45:59 functional-920930 kubelet[5865]: E0630 14:45:59.618082    5865 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Jun 30 14:45:59 functional-920930 kubelet[5865]: E0630 14:45:59.618422    5865 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Jun 30 14:45:59 functional-920930 kubelet[5865]: E0630 14:45:59.621018    5865 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmstk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kubernetes-dashboard-7779f9b69b-kk2nt_kubernetes-dashboard(d2f64fa9-28ce-4baf-9b83-5e54c01e3a90): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:45:59 functional-920930 kubelet[5865]: E0630 14:45:59.622816    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-kk2nt" podUID="d2f64fa9-28ce-4baf-9b83-5e54c01e3a90"
	Jun 30 14:46:06 functional-920930 kubelet[5865]: E0630 14:46:06.453069    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294766452589471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:06 functional-920930 kubelet[5865]: E0630 14:46:06.453508    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294766452589471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:11 functional-920930 kubelet[5865]: E0630 14:46:11.237438    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-kk2nt" podUID="d2f64fa9-28ce-4baf-9b83-5e54c01e3a90"
	Jun 30 14:46:16 functional-920930 kubelet[5865]: E0630 14:46:16.456519    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294776456196694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:16 functional-920930 kubelet[5865]: E0630 14:46:16.456586    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294776456196694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:26 functional-920930 kubelet[5865]: E0630 14:46:26.459256    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294786458792073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:26 functional-920930 kubelet[5865]: E0630 14:46:26.459291    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294786458792073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:31 functional-920930 kubelet[5865]: E0630 14:46:31.266367    5865 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:46:31 functional-920930 kubelet[5865]: E0630 14:46:31.266448    5865 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:46:31 functional-920930 kubelet[5865]: E0630 14:46:31.266749    5865 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96kw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:46:31 functional-920930 kubelet[5865]: E0630 14:46:31.268078    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe"
	Jun 30 14:46:36 functional-920930 kubelet[5865]: E0630 14:46:36.335799    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode07868ddea98fad79007acfd248a84f0/crio-4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd: Error finding container 4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd: Status 404 returned error can't find the container with id 4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd
	Jun 30 14:46:36 functional-920930 kubelet[5865]: E0630 14:46:36.336439    5865 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod5a607c43-dc74-4d3a-bac3-df6dd6d94ca2/crio-b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635: Error finding container b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635: Status 404 returned error can't find the container with id b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635
	Jun 30 14:46:36 functional-920930 kubelet[5865]: E0630 14:46:36.336697    5865 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod20cc71ef-99c9-4845-84c7-8abdcdf41a81/crio-a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707: Error finding container a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707: Status 404 returned error can't find the container with id a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707
	Jun 30 14:46:36 functional-920930 kubelet[5865]: E0630 14:46:36.336998    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode95e631d-9954-4cfb-b9d4-e3f07d238272/crio-f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052: Error finding container f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052: Status 404 returned error can't find the container with id f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052
	Jun 30 14:46:36 functional-920930 kubelet[5865]: E0630 14:46:36.337248    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2385b9fe58d0fedf6afdb66c5a0f0007/crio-7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109: Error finding container 7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109: Status 404 returned error can't find the container with id 7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109
	Jun 30 14:46:36 functional-920930 kubelet[5865]: E0630 14:46:36.461417    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294796460655829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:36 functional-920930 kubelet[5865]: E0630 14:46:36.461516    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294796460655829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:45 functional-920930 kubelet[5865]: E0630 14:46:45.236797    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe"
	Jun 30 14:46:46 functional-920930 kubelet[5865]: E0630 14:46:46.464892    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294806464201226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:46:46 functional-920930 kubelet[5865]: E0630 14:46:46.465131    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294806464201226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1] <==
	W0630 14:46:28.110085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:30.113026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:30.119331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:32.122865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:32.128040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:34.131060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:34.136489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:36.139295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:36.149019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:38.151783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:38.159635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:40.163289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:40.168136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:42.171682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:42.177146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:44.180734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:44.189559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:46.192675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:46.197281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:48.200482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:48.205039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:50.209288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:50.215390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:52.218382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:46:52.226732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7] <==
	I0630 14:39:57.695452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0630 14:39:57.765283       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0630 14:39:57.765331       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0630 14:39:57.789184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:01.252742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:05.519143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:09.118547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:12.172013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.194883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.201225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 14:40:15.201372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0630 14:40:15.201895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df68d1a8-42ba-415d-b59a-ce223f2e6b54", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2 became leader
	I0630 14:40:15.202295       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2!
	W0630 14:40:15.205119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.211461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 14:40:15.303387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2!
	W0630 14:40:17.215167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:17.224366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:19.227471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:19.237822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:21.240881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:21.246290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:23.249317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:23.256612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-920930 -n functional-920930
helpers_test.go:261: (dbg) Run:  kubectl --context functional-920930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt: exit status 1 (98.487563ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 30 Jun 2025 14:41:43 +0000
	      Finished:     Mon, 30 Jun 2025 14:41:43 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m858h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-m858h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m47s  default-scheduler  Successfully assigned default/busybox-mount to functional-920930
	  Normal  Pulling    5m47s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.354s (35.793s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m11s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-2hbbf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:04 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hsc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-86hsc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m49s                 default-scheduler  Successfully assigned default/mysql-58ccfd96bb-2hbbf to functional-920930
	  Warning  Failed     5m18s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     118s (x3 over 5m18s)  kubelet            Error: ErrImagePull
	  Warning  Failed     118s (x2 over 4m20s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    91s (x4 over 5m17s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     91s (x4 over 5m17s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    76s (x4 over 5m49s)   kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:55 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96kw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-96kw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  4m58s                  default-scheduler  Successfully assigned default/sp-pod to functional-920930
	  Normal   Pulling    2m32s (x2 over 4m58s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     23s (x2 over 2m45s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     23s (x2 over 2m45s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x2 over 2m44s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x2 over 2m44s)     kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-rf2w7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-kk2nt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt: exit status 1
E0630 14:50:20.917391 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.36s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (189.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5a607c43-dc74-4d3a-bac3-df6dd6d94ca2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004248749s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-920930 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-920930 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-920930 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-920930 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe] Pending
helpers_test.go:344: "sp-pod" [fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-920930 -n functional-920930
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-06-30 14:44:56.247833195 +0000 UTC m=+1639.335713999
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-920930 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-920930 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-920930/192.168.39.113
Start Time:       Mon, 30 Jun 2025 14:41:55 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:  10.244.0.12
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96kw5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-96kw5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-920930
Warning  Failed     47s               kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     47s               kubelet            Error: ErrImagePull
Normal   BackOff    46s               kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     46s               kubelet            Error: ImagePullBackOff
Normal   Pulling    34s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
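The events above locate the real failure: the claim bound and the pod was scheduled, but the kubelet could not pull docker.io/nginx because Docker Hub's unauthenticated pull rate limit (toomanyrequests) was exhausted. A minimal mitigation sketch, assuming docker is available on the CI host; the profile name is taken from this log and the mirror URL is a placeholder:

    # Pre-load the image into the cluster so the kubelet never pulls from docker.io:
    docker pull docker.io/nginx
    minikube -p functional-920930 image load docker.io/nginx
    # Alternative: start the cluster with a registry mirror (a Docker-runtime-oriented
    # minikube flag, shown here only as an option; URL is a placeholder):
    # minikube start -p functional-920930 --registry-mirror=https://mirror.example.com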
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-920930 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-920930 logs sp-pod -n default: exit status 1 (82.302155ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-920930 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
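The 3m0s wait the harness performs here can be approximated by hand; a hypothetical one-liner using kubectl's built-in readiness wait, not the test's actual polling code:

    kubectl --context functional-920930 -n default wait pod \
      -l test=storage-provisioner --for=condition=Ready --timeout=3m0s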
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-920930 -n functional-920930
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 logs -n 25: (1.573493765s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | --dry-run --alsologtostderr                                          |                   |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                                   |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                             |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                   | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -p functional-920930                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh mount |                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | grep 9p; ls -la /mount-9p; cat                                       |                   |         |         |                     |                     |
	|                | /mount-9p/pod-dates                                                  |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh sudo                                           | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | umount -f /mount-9p                                                  |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount2                                                           |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount3                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | --kill=true                                                          |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format short                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format yaml                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh pgrep                                          | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | buildkitd                                                            |                   |         |         |                     |                     |
	| image          | functional-920930 image build -t                                     | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | localhost/my-image:functional-920930                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                   |         |         |                     |                     |
	| image          | functional-920930 image ls                                           | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format json                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format table                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:41:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:41:51.713534 1572366 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:41:51.714254 1572366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.714303 1572366 out.go:358] Setting ErrFile to fd 2...
	I0630 14:41:51.714320 1572366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.714799 1572366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:41:51.715832 1572366 out.go:352] Setting JSON to false
	I0630 14:41:51.716949 1572366 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":30204,"bootTime":1751264308,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:41:51.717065 1572366 start.go:140] virtualization: kvm guest
	I0630 14:41:51.719204 1572366 out.go:177] * [functional-920930] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:41:51.720785 1572366 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:41:51.720788 1572366 notify.go:220] Checking for updates...
	I0630 14:41:51.723515 1572366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:41:51.725273 1572366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:41:51.727001 1572366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:41:51.728360 1572366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:41:51.729562 1572366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:41:51.731792 1572366 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:41:51.732342 1572366 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.732423 1572366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.749833 1572366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0630 14:41:51.750368 1572366 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.751029 1572366 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.751100 1572366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.751848 1572366 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.752116 1572366 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.752417 1572366 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:41:51.752842 1572366 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.752891 1572366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.770672 1572366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:41:51.771128 1572366 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.771736 1572366 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.771758 1572366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.772126 1572366 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.772410 1572366 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.813022 1572366 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 14:41:51.814347 1572366 start.go:304] selected driver: kvm2
	I0630 14:41:51.814364 1572366 start.go:908] validating driver "kvm2" against &{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:51.814524 1572366 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:41:51.815661 1572366 cni.go:84] Creating CNI manager for ""
	I0630 14:41:51.815724 1572366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:41:51.815791 1572366 start.go:347] cluster config:
	{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:51.817459 1572366 out.go:177] * dry-run validation complete!
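This "Last Start" log corresponds to the --dry-run entry at the top of the Audit table; reconstructed from that table, the invocation was roughly:

    out/minikube-linux-amd64 start -p functional-920930 --dry-run \
      --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio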
	
	
	==> CRI-O <==
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.141255645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294697141232198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddaf73cc-6304-446c-ba32-bfba8fb797b5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.142030561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e271b39f-78f8-4f1c-818e-e228ba322889 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.142103626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e271b39f-78f8-4f1c-818e-e228ba322889 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.142400771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e271b39f-78f8-4f1c-818e-e228ba322889 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.187021445Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c640ca1-bf25-4892-80f5-744b2e65a0af name=/runtime.v1.RuntimeService/Version
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.187092820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c640ca1-bf25-4892-80f5-744b2e65a0af name=/runtime.v1.RuntimeService/Version
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.188450830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed03e04f-0715-4ef0-aff1-d9f4a8b3416b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.189126615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294697189100930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed03e04f-0715-4ef0-aff1-d9f4a8b3416b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.189651589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef3faff2-d20a-4084-a07d-1e70d053c65c name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.189708700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef3faff2-d20a-4084-a07d-1e70d053c65c name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:44:57 functional-920930 crio[5068]: time="2025-06-30 14:44:57.190237001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef3faff2-d20a-4084-a07d-1e70d053c65c name=/runtime.v1.RuntimeService/ListContainers
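The ListContainers dump above is a reply on the node's CRI socket, so it can be replayed outside the test. Below is a minimal Go sketch, not part of the suite: the socket path is an assumption taken from the node's kubeadm.alpha.kubernetes.io/cri-socket annotation later in this report (unix:///var/run/crio/crio.sock), and the client comes from k8s.io/cri-api. An empty ContainerFilter returns every container, CONTAINER_RUNNING and CONTAINER_EXITED alike, which is why exited attempts appear next to their replacements.

    // Hedged sketch: replay the unfiltered ListContainers call from the log.
    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumption: CRI-O listens on its default socket, per the
        // cri-socket annotation shown in this report.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        // An empty filter matches the logged request: the full list comes back.
        resp, err := rt.ListContainers(context.Background(),
            &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s %s attempt=%d %s\n",
                c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }

The container status table below renders the same data, in the shape crictl ps -a prints.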
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e9b9ea50e8845       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    3 minutes ago       Running             echoserver                0                   8940a57dc83b8       hello-node-fcfd88b6f-ggs67
	ae1b257871468       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 minutes ago       Exited              mount-munger              0                   9a797c9b399f0       busybox-mount
	686042f565e61       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    3 minutes ago       Running             echoserver                0                   7490f84c20f02       hello-node-connect-58f9cf68d8-2fgsq
	90862d77cc8b6       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      4 minutes ago       Running             coredns                   2                   7c722315f961d       coredns-674b8bbfcf-dwpq5
	04e2e562d14a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       2                   6d61208c1e53b       storage-provisioner
	f7dab1382edaf       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                      4 minutes ago       Running             kube-proxy                2                   3658db7a8962b       kube-proxy-6gkck
	74b099ec305ed       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                      4 minutes ago       Running             kube-apiserver            0                   309b89afbd585       kube-apiserver-functional-920930
	1cee72c70835f       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                      4 minutes ago       Running             kube-controller-manager   2                   c19148f62d2ab       kube-controller-manager-functional-920930
	defc9e44613eb       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                      4 minutes ago       Running             kube-scheduler            3                   beb4e35bc8e74       kube-scheduler-functional-920930
	ad4580dc5ad92       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      4 minutes ago       Running             etcd                      2                   bffb72838797f       etcd-functional-920930
	eb52567ddb962       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                      4 minutes ago       Exited              kube-scheduler            2                   beb4e35bc8e74       kube-scheduler-functional-920930
	2de81fa95be3a       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      4 minutes ago       Exited              coredns                   1                   f41b1eeade261       coredns-674b8bbfcf-dwpq5
	628c9b71ec42e       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                      4 minutes ago       Exited              kube-proxy                1                   a32019f55f5cf       kube-proxy-6gkck
	e904ed633b9af       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                      5 minutes ago       Exited              kube-controller-manager   1                   4c60d47a534d5       kube-controller-manager-functional-920930
	fecd739e675eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       1                   b150df55bd94f       storage-provisioner
	cd18d42edb6d1       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      5 minutes ago       Exited              etcd                      1                   7942bf3b782e4       etcd-functional-920930
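One field from the crio dump worth unpacking here: the coredns rows above (attempts 1 and 2) carry their port set in the io.kubernetes.container.ports annotation, whose value is a JSON array. A self-contained decoding sketch follows; the containerPort struct is a local stand-in, not a Kubernetes API type, and the payload is copied (unescaped) from the log:

    // Hedged sketch: decode the io.kubernetes.container.ports annotation
    // attached to the coredns containers in the ListContainers dump.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type containerPort struct {
        Name          string `json:"name"`
        ContainerPort int32  `json:"containerPort"`
        Protocol      string `json:"protocol"`
    }

    func main() {
        // Annotation value as logged, with the backslash escapes removed.
        raw := `[{"name":"dns","containerPort":53,"protocol":"UDP"},
                 {"name":"dns-tcp","containerPort":53,"protocol":"TCP"},
                 {"name":"metrics","containerPort":9153,"protocol":"TCP"}]`

        var ports []containerPort
        if err := json.Unmarshal([]byte(raw), &ports); err != nil {
            log.Fatal(err)
        }
        for _, p := range ports {
            fmt.Printf("%s %d/%s\n", p.Name, p.ContainerPort, p.Protocol)
        }
    }

The metrics port it prints (9153/TCP) is the same one the CoreDNS logs below advertise through their prometheus endpoint.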
	
	
	==> coredns [2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:43161 - 51463 "HINFO IN 5671052926409919755.872030747434307965. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.049518594s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:57131 - 26142 "HINFO IN 5030995905101555605.8240098368165366984. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034335283s
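Both coredns instances log the same startup signature: a random-label HINFO query answered NXDOMAIN, the pattern the loop plugin uses to probe for forwarding loops. A hedged reproduction of such a probe, assuming github.com/miekg/dns as the resolver library and the conventional kube-dns ClusterIP 10.96.0.10 (neither appears in this report):

    // Hedged sketch: send a loop-plugin-style HINFO probe and print the rcode.
    package main

    import (
        "fmt"
        "log"
        "math/rand"

        "github.com/miekg/dns"
    )

    func main() {
        // Two random numeric labels under the root zone, mimicking the
        // "5671052926409919755.872030747434307965." queries logged above.
        name := fmt.Sprintf("%d.%d.", rand.Uint64(), rand.Uint64())

        m := new(dns.Msg)
        m.SetQuestion(name, dns.TypeHINFO)

        c := new(dns.Client)
        // Assumption: the default kube-dns service IP; substitute your own.
        r, _, err := c.Exchange(m, "10.96.0.10:53")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(dns.RcodeToString[r.Rcode]) // NXDOMAIN expected, as logged
    }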
	
	
	==> describe nodes <==
	Name:               functional-920930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-920930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=functional-920930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_39_30_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:39:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-920930
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:44:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:42:42 +0000   Mon, 30 Jun 2025 14:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    functional-920930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011052Ki
	  pods:               110
	System Info:
	  Machine ID:                 091fc3b53bfe4228b4114aaf6e11ec06
	  System UUID:                091fc3b5-3bfe-4228-b411-4aaf6e11ec06
	  Boot ID:                    3d819405-1145-4abc-bc97-f0a5952e9ba8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-2fgsq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  default                     hello-node-fcfd88b6f-ggs67                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  default                     mysql-58ccfd96bb-2hbbf                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    3m53s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-674b8bbfcf-dwpq5                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m23s
	  kube-system                 etcd-functional-920930                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m28s
	  kube-system                 kube-apiserver-functional-920930              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-functional-920930     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-6gkck                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-functional-920930              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-rf2w7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-kk2nt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  Starting                 4m59s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m35s (x8 over 5m35s)  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m35s (x8 over 5m35s)  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s (x7 over 5m35s)  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m28s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m28s                  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s                  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s                  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m27s                  kubelet          Node functional-920930 status is now: NodeReady
	  Normal  RegisteredNode           5m24s                  node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
	  Normal  Starting                 5m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)    kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)    kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)    kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m58s                  node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
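A quick sanity check on the Allocated resources block above: the percentages are summed pod requests divided by node allocatable, truncated to an integer. A sketch using k8s.io/apimachinery quantities, with the values copied from this node's tables (limits work the same way):

    // Hedged sketch: recompute the "Allocated resources" percentages from
    // the Allocatable values and request totals shown above.
    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        cpuReq := resource.MustParse("1350m")       // summed CPU requests
        cpuAlloc := resource.MustParse("2")         // allocatable cpu
        memReq := resource.MustParse("682Mi")       // summed memory requests
        memAlloc := resource.MustParse("4011052Ki") // allocatable memory

        // 1350m / 2000m -> 67%, 682Mi / 4011052Ki -> 17%, matching kubectl.
        fmt.Printf("cpu %d%%\n", cpuReq.MilliValue()*100/cpuAlloc.MilliValue())
        fmt.Printf("memory %d%%\n", memReq.Value()*100/memAlloc.Value())
    }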
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.000040] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000295] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Jun30 14:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103461] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.165029] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.732010] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.128604] kauditd_printk_skb: 81 callbacks suppressed
	[  +4.661998] kauditd_printk_skb: 173 callbacks suppressed
	[Jun30 14:40] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.124008] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.123477] kauditd_printk_skb: 138 callbacks suppressed
	[  +0.066516] kauditd_printk_skb: 63 callbacks suppressed
	[  +7.874879] kauditd_printk_skb: 17 callbacks suppressed
	[Jun30 14:41] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.217146] kauditd_printk_skb: 34 callbacks suppressed
	[ +24.853412] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.480845] 9pnet: p9_fd_create_tcp (8344): problem connecting socket to 192.168.39.1
	[  +5.945689] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:42] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028] <==
	{"level":"info","ts":"2025-06-30T14:40:37.157909Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3af003d6f0036250","local-member-id":"8069059f79d446ff","added-peer-id":"8069059f79d446ff","added-peer-peer-urls":["https://192.168.39.113:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-06-30T14:40:37.158117Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"3af003d6f0036250","local-member-id":"8069059f79d446ff","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T14:40:37.158182Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T14:40:37.165503Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-06-30T14:40:37.165827Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"8069059f79d446ff","initial-advertise-peer-urls":["https://192.168.39.113:2380"],"listen-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.113:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-06-30T14:40:37.165872Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-30T14:40:37.165983Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:37.166012Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:39.013343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff is starting a new election at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became pre-candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgPreVoteResp from 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became candidate at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgVoteResp from 8069059f79d446ff at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became leader at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8069059f79d446ff elected leader 8069059f79d446ff at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.016364Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"8069059f79d446ff","local-member-attributes":"{Name:functional-920930 ClientURLs:[https://192.168.39.113:2379]}","request-path":"/0/members/8069059f79d446ff/attributes","cluster-id":"3af003d6f0036250","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:40:39.016549Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:40:39.016883Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:40:39.017420Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:40:39.017558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:39.017607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:39.018102Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:40:39.018553Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:40:39.019210Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.113:2379"}
	{"level":"info","ts":"2025-06-30T14:41:44.641422Z","caller":"traceutil/trace.go:171","msg":"trace[2037699269] transaction","detail":"{read_only:false; response_revision:793; number_of_response:1; }","duration":"228.150284ms","start":"2025-06-30T14:41:44.413242Z","end":"2025-06-30T14:41:44.641393Z","steps":["trace[2037699269] 'process raft request'  (duration: 228.031768ms)"],"step_count":1}
	
	
	==> etcd [cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188] <==
	{"level":"info","ts":"2025-06-30T14:39:55.114512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became pre-candidate at term 2"}
	{"level":"info","ts":"2025-06-30T14:39:55.114569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgPreVoteResp from 8069059f79d446ff at term 2"}
	{"level":"info","ts":"2025-06-30T14:39:55.114596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgVoteResp from 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became leader at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8069059f79d446ff elected leader 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.120330Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:39:55.121082Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:39:55.121635Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.113:2379"}
	{"level":"info","ts":"2025-06-30T14:39:55.121877Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:39:55.122364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:39:55.122875Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:39:55.120294Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"8069059f79d446ff","local-member-attributes":"{Name:functional-920930 ClientURLs:[https://192.168.39.113:2379]}","request-path":"/0/members/8069059f79d446ff/attributes","cluster-id":"3af003d6f0036250","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:39:55.131019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:39:55.131066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:23.306997Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-06-30T14:40:23.308035Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-920930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	{"level":"warn","ts":"2025-06-30T14:40:23.382168Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382224Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382275Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382282Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"info","ts":"2025-06-30T14:40:23.382330Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8069059f79d446ff","current-leader-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2025-06-30T14:40:23.389987Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:23.390088Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:23.390098Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-920930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	
	
	==> kernel <==
	 14:44:57 up 6 min,  0 users,  load average: 0.17, 0.40, 0.23
	Linux functional-920930 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e] <==
	I0630 14:40:40.286148       1 cache.go:39] Caches are synced for autoregister controller
	I0630 14:40:40.319968       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0630 14:40:40.426660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0630 14:40:41.154736       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 14:40:41.971098       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 14:40:42.010584       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0630 14:40:42.039456       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 14:40:42.047332       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 14:40:43.639026       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:40:43.874665       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 14:40:43.924528       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0630 14:40:43.976253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0630 14:40:59.043810       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:40:59.044205       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.17.70"}
	I0630 14:41:02.488213       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:04.616858       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.109.26"}
	I0630 14:41:04.633281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:06.176890       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:06.184813       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.205.82"}
	I0630 14:41:12.664714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:12.665154       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.25.211"}
	I0630 14:41:52.950997       1 controller.go:667] quota admission added evaluator for: namespaces
	I0630 14:41:53.284992       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.241.244"}
	I0630 14:41:53.292370       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:53.334412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.40.171"}
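	
	The alloc.go lines record the ClusterIPs handed to the test services (invalid-svc, mysql, hello-node-connect, hello-node, and the two dashboard services). A quick cross-check that the allocations stuck:
	
	    kubectl --context functional-920930 get svc -A -o wide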
	
	
	==> kube-controller-manager [1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859] <==
	I0630 14:40:43.553714       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0630 14:40:43.577042       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 14:40:43.578385       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0630 14:40:43.581096       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0630 14:40:43.583014       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:40:43.621033       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0630 14:40:43.797594       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0630 14:40:43.820318       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:43.820505       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0630 14:40:43.821742       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 14:40:43.824128       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 14:40:43.833393       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:43.872030       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0630 14:40:44.269400       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:44.293554       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:44.293586       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:40:44.293594       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0630 14:41:53.060063       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.076088       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.076352       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.091858       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.102085       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.102200       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.109054       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.113648       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
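	
	The "serviceaccount \"kubernetes-dashboard\" not found" errors are an ordering race while the dashboard addon manifests are applied: both ReplicaSets are synced before their ServiceAccount exists, and the controller retries until it does. Whether the account eventually landed can be verified with:
	
	    kubectl --context functional-920930 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard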
	
	
	==> kube-controller-manager [e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82] <==
	I0630 14:39:59.918115       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0630 14:39:59.918178       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0630 14:39:59.918629       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0630 14:39:59.920103       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 14:39:59.920307       1 shared_informer.go:357] "Caches are synced" controller="TTL"
	I0630 14:39:59.925191       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0630 14:39:59.925322       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0630 14:39:59.928206       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 14:39:59.943007       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0630 14:39:59.943069       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0630 14:39:59.943157       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 14:39:59.943279       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-920930"
	I0630 14:39:59.943339       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 14:39:59.967874       1 shared_informer.go:357] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0630 14:40:00.019005       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:40:00.028389       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0630 14:40:00.067093       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0630 14:40:00.117961       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0630 14:40:00.129551       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:00.137073       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:00.267074       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0630 14:40:00.638352       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:00.638382       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:40:00.638389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0630 14:40:00.648124       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc] <==
	E0630 14:39:58.003479       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:39:58.013306       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0630 14:39:58.013594       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:39:58.047614       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:39:58.047642       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:39:58.047662       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:39:58.057350       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:39:58.057659       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:39:58.057710       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:39:58.062763       1 config.go:199] "Starting service config controller"
	I0630 14:39:58.062781       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:39:58.062803       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:39:58.062806       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:39:58.062816       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:39:58.062819       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:39:58.063534       1 config.go:329] "Starting node config controller"
	I0630 14:39:58.064061       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:39:58.163472       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:39:58.163563       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:39:58.163537       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:39:58.164141       1 shared_informer.go:357] "Caches are synced" controller="node config"
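	
	The nftables cleanup failure ("add table ip6 kube-proxy ... Operation not supported") together with "No iptables support for family" for IPv6 indicates the Buildroot guest kernel lacks IPv6 netfilter support, so kube-proxy falls back to the single-stack IPv4 iptables proxier; the identical block in the second kube-proxy instance below confirms this is environmental, not transient. Assuming the guest kernel exposes /proc/config.gz, the relevant options can be inspected directly:
	
	    minikube -p functional-920930 ssh -- "zcat /proc/config.gz | grep -E 'CONFIG_(NF_TABLES|IP6_NF)'"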
	
	
	==> kube-proxy [f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9] <==
	E0630 14:40:40.828155       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:40:40.840141       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0630 14:40:40.840317       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:40:40.875016       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:40:40.875078       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:40:40.875101       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:40:40.883154       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:40:40.883493       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:40:40.883534       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:40:40.887292       1 config.go:199] "Starting service config controller"
	I0630 14:40:40.887326       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:40:40.887340       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:40:40.887354       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:40:40.887367       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:40:40.887370       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:40:40.887385       1 config.go:329] "Starting node config controller"
	I0630 14:40:40.887402       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:40:40.987555       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 14:40:40.987660       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:40:40.987702       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:40:40.988035       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867] <==
	I0630 14:40:38.041658       1 serving.go:386] Generated self-signed cert in-memory
	W0630 14:40:40.219828       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0630 14:40:40.219905       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0630 14:40:40.219916       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 14:40:40.219922       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 14:40:40.265788       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 14:40:40.269342       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:40:40.272913       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 14:40:40.274764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:40.274828       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:40.274857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0630 14:40:40.375725       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d] <==
	I0630 14:40:33.215007       1 serving.go:386] Generated self-signed cert in-memory
	W0630 14:40:34.261377       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.113:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.113:8441: connect: connection refused
	W0630 14:40:34.261420       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 14:40:34.261429       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 14:40:34.268807       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 14:40:34.268856       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0630 14:40:34.268874       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0630 14:40:34.270662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:34.270759       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0630 14:40:34.270806       1 shared_informer.go:353] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:34.271094       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 14:40:34.271240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0630 14:40:34.271294       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0630 14:40:34.271469       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	E0630 14:40:34.271630       1 server.go:271] "handlers are not fully synchronized" err="context canceled"
	E0630 14:40:34.271706       1 run.go:72] "command failed" err="finished without leader elect"
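	
	This is the scheduler instance from before the apiserver restart: it cannot reach 192.168.39.113:8441, never syncs its informer caches, and is terminated mid-startup, which is what "finished without leader elect" amounts to here; the defc9e44... instance above is its replacement. The live scheduler can be confirmed via the component label kubeadm puts on its static pods:
	
	    kubectl --context functional-920930 -n kube-system get pods -l component=kube-scheduler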
	
	
	==> kubelet <==
	Jun 30 14:44:06 functional-920930 kubelet[5865]: E0630 14:44:06.411123    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294646410611117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:09 functional-920930 kubelet[5865]: E0630 14:44:09.766875    5865 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:44:09 functional-920930 kubelet[5865]: E0630 14:44:09.766981    5865 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:44:09 functional-920930 kubelet[5865]: E0630 14:44:09.767206    5865 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96kw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:44:09 functional-920930 kubelet[5865]: E0630 14:44:09.769458    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe"
	Jun 30 14:44:10 functional-920930 kubelet[5865]: E0630 14:44:10.565998    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe"
	Jun 30 14:44:16 functional-920930 kubelet[5865]: E0630 14:44:16.414036    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294656413293409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:16 functional-920930 kubelet[5865]: E0630 14:44:16.414291    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294656413293409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:26 functional-920930 kubelet[5865]: E0630 14:44:26.422100    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294666421553917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:26 functional-920930 kubelet[5865]: E0630 14:44:26.422131    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294666421553917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:36 functional-920930 kubelet[5865]: E0630 14:44:36.335483    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode95e631d-9954-4cfb-b9d4-e3f07d238272/crio-f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052: Error finding container f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052: Status 404 returned error can't find the container with id f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052
	Jun 30 14:44:36 functional-920930 kubelet[5865]: E0630 14:44:36.336273    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2385b9fe58d0fedf6afdb66c5a0f0007/crio-7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109: Error finding container 7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109: Status 404 returned error can't find the container with id 7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109
	Jun 30 14:44:36 functional-920930 kubelet[5865]: E0630 14:44:36.336671    5865 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod5a607c43-dc74-4d3a-bac3-df6dd6d94ca2/crio-b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635: Error finding container b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635: Status 404 returned error can't find the container with id b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635
	Jun 30 14:44:36 functional-920930 kubelet[5865]: E0630 14:44:36.337008    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode07868ddea98fad79007acfd248a84f0/crio-4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd: Error finding container 4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd: Status 404 returned error can't find the container with id 4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd
	Jun 30 14:44:36 functional-920930 kubelet[5865]: E0630 14:44:36.337370    5865 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod20cc71ef-99c9-4845-84c7-8abdcdf41a81/crio-a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707: Error finding container a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707: Status 404 returned error can't find the container with id a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707
	Jun 30 14:44:36 functional-920930 kubelet[5865]: E0630 14:44:36.425260    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294676424527643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:36 functional-920930 kubelet[5865]: E0630 14:44:36.425340    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294676424527643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:46 functional-920930 kubelet[5865]: E0630 14:44:46.427960    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294686427398021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:46 functional-920930 kubelet[5865]: E0630 14:44:46.428004    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294686427398021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:56 functional-920930 kubelet[5865]: E0630 14:44:56.261606    5865 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Jun 30 14:44:56 functional-920930 kubelet[5865]: E0630 14:44:56.261691    5865 kuberuntime_image.go:42] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Jun 30 14:44:56 functional-920930 kubelet[5865]: E0630 14:44:56.262022    5865 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-86hsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-2hbbf_default(e81271f1-2994-4dbc-90dc-d663342b5710): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:44:56 functional-920930 kubelet[5865]: E0630 14:44:56.263449    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-2hbbf" podUID="e81271f1-2994-4dbc-90dc-d663342b5710"
	Jun 30 14:44:56 functional-920930 kubelet[5865]: E0630 14:44:56.432483    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294696431869167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:44:56 functional-920930 kubelet[5865]: E0630 14:44:56.432602    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751294696431869167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
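	
	Two distinct failures repeat through this section. The eviction-manager "missing image stats" errors stem from the empty ContainerFilesystems in the CRI-O ImageFsInfo response and are noise; the pulls of docker.io/nginx and docker.io/mysql:5.7 failing with toomanyrequests are what actually break the tests. One mitigation is loading the images from a host that already has them; a sketch assuming a local Docker daemon holds the tags:
	
	    docker pull mysql:5.7 && docker pull nginx:latest
	    minikube -p functional-920930 image load docker.io/mysql:5.7
	    minikube -p functional-920930 image load docker.io/nginx:latest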
	
	
	==> storage-provisioner [04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1] <==
	W0630 14:44:33.523590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:35.526844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:35.531688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:37.535091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:37.544886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:39.548699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:39.554193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:41.558430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:41.564697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:43.569028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:43.577882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:45.581054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:45.589264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:47.592925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:47.602790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:49.606892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:49.612568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:51.616081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:51.624854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:53.628114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:53.633904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:55.636361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:55.646838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:57.655023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:44:57.664204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7] <==
	I0630 14:39:57.695452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0630 14:39:57.765283       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0630 14:39:57.765331       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0630 14:39:57.789184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:01.252742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:05.519143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:09.118547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:12.172013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.194883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.201225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 14:40:15.201372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0630 14:40:15.201895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df68d1a8-42ba-415d-b59a-ce223f2e6b54", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2 became leader
	I0630 14:40:15.202295       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2!
	W0630 14:40:15.205119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.211461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 14:40:15.303387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2!
	W0630 14:40:17.215167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:17.224366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:19.227471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:19.237822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:21.240881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:21.246290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:23.249317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:23.256612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
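	
	Both provisioner instances use the legacy Endpoints-based leader election, so the EndpointSlice deprecation warning fires on every lease renewal (roughly every two seconds). The lease itself is the k8s.io-minikube-hostpath Endpoints object named in the election messages and can be inspected with:
	
	    kubectl --context functional-920930 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml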
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-920930 -n functional-920930
helpers_test.go:261: (dbg) Run:  kubectl --context functional-920930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt: exit status 1 (89.572321ms)
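
The exit status 1 is expected: kubectl describe pod without -n searches only the default namespace, so the two dashboard pods come back NotFound even though the default-namespace pods are described below. Those two need a namespaced query (pod names taken from this run):

    kubectl --context functional-920930 -n kubernetes-dashboard describe pod \
      dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt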

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 30 Jun 2025 14:41:43 +0000
	      Finished:     Mon, 30 Jun 2025 14:41:43 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m858h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-m858h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-920930
	  Normal  Pulling    3m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m15s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.354s (35.793s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m15s  kubelet            Created container: mount-munger
	  Normal  Started    3m14s  kubelet            Started container mount-munger
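	
	The mount-munger command ran against the 9p share exported by the test's host-side mount process (the dmesg 9pnet line earlier shows the guest dialing the host at 192.168.39.1). Apart from the slow image pull under the rate limit, the pod completed cleanly. Reproducing the share by hand looks roughly like this, with an illustrative local source path:
	
	    minikube -p functional-920930 mount /tmp/mount-test-src:/mount-9p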
	
	
	Name:             mysql-58ccfd96bb-2hbbf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:04 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hsc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-86hsc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m53s                 default-scheduler  Successfully assigned default/mysql-58ccfd96bb-2hbbf to functional-920930
	  Warning  Failed     3m22s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m8s (x2 over 3m21s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m8s (x2 over 3m21s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    114s (x3 over 3m53s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2s (x3 over 3m22s)    kubelet            Error: ErrImagePull
	  Warning  Failed     2s (x2 over 2m24s)    kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:55 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96kw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-96kw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-920930
	  Warning  Failed     49s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     49s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    48s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     48s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    36s (x2 over 3m2s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-rf2w7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-kk2nt" not found

                                                
                                                
** /stderr **
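For reference, the volume wiring that the sp-pod describe output above implies can be reconstructed as a minimal manifest sketch. The claim name (myclaim), mount path (/tmp/mount), container name and image, and pod label are taken directly from the describe output; the access mode and storage size are assumptions, since the test's actual testdata manifests are not shown here.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce        # assumption: not visible in the describe output
  resources:
    requests:
      storage: 500Mi       # assumption: size not visible in the describe output
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: docker.io/nginx
      volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

Note that PodScheduled and PodReadyToStartContainers are both True in the describe output, so the claim bound and the volume attached; the failure is confined to the docker.io/nginx image pull.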
helpers_test.go:279: kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt: exit status 1
E0630 14:45:20.917332 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:45:48.622082 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.16s)

                                                
                                    
TestFunctional/parallel/MySQL (603.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-920930 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-2hbbf" [e81271f1-2994-4dbc-90dc-d663342b5710] Pending
helpers_test.go:344: "mysql-58ccfd96bb-2hbbf" [e81271f1-2994-4dbc-90dc-d663342b5710] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-920930 -n functional-920930
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-06-30 14:51:04.987816935 +0000 UTC m=+2008.075697748
functional_test.go:1816: (dbg) Run:  kubectl --context functional-920930 describe po mysql-58ccfd96bb-2hbbf -n default
functional_test.go:1816: (dbg) kubectl --context functional-920930 describe po mysql-58ccfd96bb-2hbbf -n default:
Name:             mysql-58ccfd96bb-2hbbf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-920930/192.168.39.113
Start Time:       Mon, 30 Jun 2025 14:41:04 +0000
Labels:           app=mysql
                  pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hsc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-86hsc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-2hbbf to functional-920930
  Warning  Failed     3m56s (x3 over 8m31s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m25s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     71s (x2 over 9m29s)    kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     71s (x5 over 9m29s)    kubelet            Error: ErrImagePull
  Normal   BackOff    7s (x15 over 9m28s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     7s (x15 over 9m28s)    kubelet            Error: ImagePullBackOff
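The pod template that testdata/mysql.yaml applies can be sketched from the fields shown in the describe output above (image, container port, resource requests/limits, environment). This is a reconstruction for readability, not the actual file; the replica count and anything not visible above are assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1                    # assumption: not visible in the describe output
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: docker.io/mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          resources:
            requests:
              cpu: 600m
              memory: 512Mi
            limits:
              cpu: 700m
              memory: 700Mi

The spec itself is not at fault here: every pull of docker.io/mysql:5.7 hit Docker Hub's unauthenticated rate limit, so the container never left the Waiting/ImagePullBackOff state.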
functional_test.go:1816: (dbg) Run:  kubectl --context functional-920930 logs mysql-58ccfd96bb-2hbbf -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-920930 logs mysql-58ccfd96bb-2hbbf -n default: exit status 1 (85.76658ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-2hbbf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1816: kubectl --context functional-920930 logs mysql-58ccfd96bb-2hbbf -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
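The root cause is Docker Hub throttling unauthenticated pulls (toomanyrequests), not a cluster fault. One common mitigation, not applied in this run, is to authenticate pulls with an image pull secret; a minimal sketch, with hypothetical names and a placeholder credential:

apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-creds          # hypothetical name
  namespace: default
type: kubernetes.io/dockerconfigjson
data:
  # placeholder: base64 of a docker config.json holding Docker Hub credentials
  .dockerconfigjson: <base64-encoded config.json>

The secret is then referenced from the pod template (spec.template.spec.imagePullSecrets: [{name: dockerhub-creds}]). Alternatively, pre-loading the image into the cluster (for example, minikube image load docker.io/mysql:5.7) avoids the remote pull entirely.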
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-920930 -n functional-920930
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 logs -n 25: (1.577912898s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | --dry-run --alsologtostderr                                          |                   |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                                   |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                             |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                   | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -p functional-920930                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh mount |                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | grep 9p; ls -la /mount-9p; cat                                       |                   |         |         |                     |                     |
	|                | /mount-9p/pod-dates                                                  |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh sudo                                           | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:41 UTC |                     |
	|                | umount -f /mount-9p                                                  |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount2                                                           |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh findmnt                                        | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | -T /mount3                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-920930                                                 | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | --kill=true                                                          |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format short                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format yaml                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-920930 ssh pgrep                                          | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC |                     |
	|                | buildkitd                                                            |                   |         |         |                     |                     |
	| image          | functional-920930 image build -t                                     | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | localhost/my-image:functional-920930                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                   |         |         |                     |                     |
	| image          | functional-920930 image ls                                           | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format json                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-920930                                                    | functional-920930 | jenkins | v1.36.0 | 30 Jun 25 14:42 UTC | 30 Jun 25 14:42 UTC |
	|                | image ls --format table                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:41:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:41:51.713534 1572366 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:41:51.714254 1572366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.714303 1572366 out.go:358] Setting ErrFile to fd 2...
	I0630 14:41:51.714320 1572366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.714799 1572366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:41:51.715832 1572366 out.go:352] Setting JSON to false
	I0630 14:41:51.716949 1572366 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":30204,"bootTime":1751264308,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:41:51.717065 1572366 start.go:140] virtualization: kvm guest
	I0630 14:41:51.719204 1572366 out.go:177] * [functional-920930] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:41:51.720785 1572366 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:41:51.720788 1572366 notify.go:220] Checking for updates...
	I0630 14:41:51.723515 1572366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:41:51.725273 1572366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:41:51.727001 1572366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:41:51.728360 1572366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:41:51.729562 1572366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:41:51.731792 1572366 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:41:51.732342 1572366 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.732423 1572366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.749833 1572366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0630 14:41:51.750368 1572366 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.751029 1572366 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.751100 1572366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.751848 1572366 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.752116 1572366 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.752417 1572366 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:41:51.752842 1572366 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.752891 1572366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.770672 1572366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0630 14:41:51.771128 1572366 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.771736 1572366 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.771758 1572366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.772126 1572366 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.772410 1572366 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.813022 1572366 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 14:41:51.814347 1572366 start.go:304] selected driver: kvm2
	I0630 14:41:51.814364 1572366 start.go:908] validating driver "kvm2" against &{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 Clu
sterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:51.814524 1572366 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:41:51.815661 1572366 cni.go:84] Creating CNI manager for ""
	I0630 14:41:51.815724 1572366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:41:51.815791 1572366 start.go:347] cluster config:
	{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:51.817459 1572366 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Jun 30 14:51:05 functional-920930 crio[5068]: time="2025-06-30 14:51:05.974774283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295065974749186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3acc5459-db77-46d2-9419-b5d6e4837e30 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:51:05 functional-920930 crio[5068]: time="2025-06-30 14:51:05.975406019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87d7fe82-6e7b-4040-9feb-247d714a0a59 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:05 functional-920930 crio[5068]: time="2025-06-30 14:51:05.975468967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87d7fe82-6e7b-4040-9feb-247d714a0a59 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:05 functional-920930 crio[5068]: time="2025-06-30 14:51:05.975749656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87d7fe82-6e7b-4040-9feb-247d714a0a59 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.023084047Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5acce680-f78d-4140-a89e-67b99c9199df name=/runtime.v1.RuntimeService/Version
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.023172050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5acce680-f78d-4140-a89e-67b99c9199df name=/runtime.v1.RuntimeService/Version
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.024897166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd1dc45e-9f3b-455e-858a-ac85b7d5cb77 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.025657913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295066025635011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd1dc45e-9f3b-455e-858a-ac85b7d5cb77 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.026401593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad2d8c0f-7763-4865-9b02-117792f763c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.026468579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad2d8c0f-7763-4865-9b02-117792f763c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.026757267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad2d8c0f-7763-4865-9b02-117792f763c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.064177870Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48e008a2-c9b4-43a1-a5c8-e8b558510147 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.064311535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48e008a2-c9b4-43a1-a5c8-e8b558510147 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.066034822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=888b29ac-8604-4574-b0e7-dcf30f609b65 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.066693879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295066066668683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=888b29ac-8604-4574-b0e7-dcf30f609b65 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.067487722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afe5cefc-6cbc-4f51-b845-017933d73908 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.067553222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afe5cefc-6cbc-4f51-b845-017933d73908 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.067838015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afe5cefc-6cbc-4f51-b845-017933d73908 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.103701096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c2aa723-8621-4a87-9e97-c7083521f854 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.103839213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c2aa723-8621-4a87-9e97-c7083521f854 name=/runtime.v1.RuntimeService/Version
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.105520891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06c263a1-9758-4885-a338-1633ec0b63e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.106484234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295066106447052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06c263a1-9758-4885-a338-1633ec0b63e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.107466466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f68ae21-8771-4789-88cc-cb0246ab1692 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.107638117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f68ae21-8771-4789-88cc-cb0246ab1692 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 14:51:06 functional-920930 crio[5068]: time="2025-06-30 14:51:06.108270734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9b9ea50e884536d890d2a93d4b122e97b002e341bc2661d65b40c02c1ed0b3a,PodSandboxId:8940a57dc83b80559c6865c6dcae66597a80542186cf56c3c757791074542767,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294503557676071,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-ggs67,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e400a30-5d07-4bfd-8e18-be55dd8c1b8f,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef,PodSandboxId:9a797c9b399f0f763a1222cf8bbcd4c37d2487a2a0774ba75075fdcb89e3d0f5,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1751294503478503356,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 671e6fc7-8830-4da1-9cb1-954a8917a998,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686042f565e616bcce2c8fc7c5dda968d7f87e6409632cc7d5f457966815eb65,PodSandboxId:7490f84c20f02fea5006a88d6d7def2d125bd030cd44ea9067a95f6e9357a47e,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1751294500118613282,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-2fgsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f0ce418-59cc-4ca6-bd22-780c56a99932,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9,PodSandboxId:3658db7a8962bd13382d4a52ddf67f49c71ab1a731ef1940b2caa4b5b398f131,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751294440557064910,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1,PodSandboxId:6d61208c1e53b10579259d5d3f6214a54b5a8685518946a568680e008d663441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751294440569611081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09,PodSandboxId:7c722315f961d15a752d12c7e79a5570cc34f72491f138d0fcb17b966bcba138,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751294440583188039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e,PodSandboxId:309b89afbd585ef6fe1874b5a24b7ad65eb84d370ca186e620b54e3a577f7cbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751294437062375772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d511bf1c483fd626e2323fcdf9c3ebdb,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859,PodSandboxId:c19148f62d2ab503a0c1ef81d741fb4819334414dc7082b875b214146eaa7939,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751294436927706459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751294436914839761,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028,PodSandboxId:bffb72838797ff1a55274d824220309473a3d3cb6d39c9decd6bb7947d102a04,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751294436884050258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,}
,Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d,PodSandboxId:beb4e35bc8e74a9ed7ae63cbc91b10a3ffb29cb63bb61101da61ac4c169f250d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751294432218722765,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43bdb33270c477f3aa244e7772087ee1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148,PodSandboxId:f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751294397715269199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-dwpq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95e631d-9954-4cfb-b9d4-e3f07d238272,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc,PodSandboxId:a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751294397673421234,Labels:map[string]string{io.kubernetes.container.name: kube-prox
y,io.kubernetes.pod.name: kube-proxy-6gkck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20cc71ef-99c9-4845-84c7-8abdcdf41a81,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7,PodSandboxId:b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751294391244422740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.
pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a607c43-dc74-4d3a-bac3-df6dd6d94ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82,PodSandboxId:4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751294391268506766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-functional-920930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e07868ddea98fad79007acfd248a84f0,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188,PodSandboxId:7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751294391231097555,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-92093
0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2385b9fe58d0fedf6afdb66c5a0f0007,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f68ae21-8771-4789-88cc-cb0246ab1692 name=/runtime.v1.RuntimeService/ListContainers
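The three RPCs above (Version, ImageFsInfo, and an unfiltered ListContainers) repeat several times within the same second, and the payload of each ListContainersResponse is identical, so the container set was stable while the logs were being collected. A hedged way to issue the same CRI calls by hand, assuming crictl is available on the node (it ships with minikube's crio runtime):

	$ minikube -p functional-920930 ssh
	$ sudo crictl version       # /runtime.v1.RuntimeService/Version
	$ sudo crictl imagefsinfo   # /runtime.v1.ImageService/ImageFsInfo
	$ sudo crictl ps -a         # /runtime.v1.RuntimeService/ListContainers, no filter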
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e9b9ea50e8845       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    9 minutes ago       Running             echoserver                0                   8940a57dc83b8       hello-node-fcfd88b6f-ggs67
	ae1b257871468       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   9a797c9b399f0       busybox-mount
	686042f565e61       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    9 minutes ago       Running             echoserver                0                   7490f84c20f02       hello-node-connect-58f9cf68d8-2fgsq
	90862d77cc8b6       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      10 minutes ago      Running             coredns                   2                   7c722315f961d       coredns-674b8bbfcf-dwpq5
	04e2e562d14a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       2                   6d61208c1e53b       storage-provisioner
	f7dab1382edaf       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                      10 minutes ago      Running             kube-proxy                2                   3658db7a8962b       kube-proxy-6gkck
	74b099ec305ed       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e                                      10 minutes ago      Running             kube-apiserver            0                   309b89afbd585       kube-apiserver-functional-920930
	1cee72c70835f       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                      10 minutes ago      Running             kube-controller-manager   2                   c19148f62d2ab       kube-controller-manager-functional-920930
	defc9e44613eb       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                      10 minutes ago      Running             kube-scheduler            3                   beb4e35bc8e74       kube-scheduler-functional-920930
	ad4580dc5ad92       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      10 minutes ago      Running             etcd                      2                   bffb72838797f       etcd-functional-920930
	eb52567ddb962       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b                                      10 minutes ago      Exited              kube-scheduler            2                   beb4e35bc8e74       kube-scheduler-functional-920930
	2de81fa95be3a       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      11 minutes ago      Exited              coredns                   1                   f41b1eeade261       coredns-674b8bbfcf-dwpq5
	628c9b71ec42e       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19                                      11 minutes ago      Exited              kube-proxy                1                   a32019f55f5cf       kube-proxy-6gkck
	e904ed633b9af       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2                                      11 minutes ago      Exited              kube-controller-manager   1                   4c60d47a534d5       kube-controller-manager-functional-920930
	fecd739e675eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       1                   b150df55bd94f       storage-provisioner
	cd18d42edb6d1       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      11 minutes ago      Exited              etcd                      1                   7942bf3b782e4       etcd-functional-920930
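The table is a condensed view of the ListContainersResponse payloads above: ATTEMPT corresponds to the io.kubernetes.container.restartCount annotation, and pairs such as kube-scheduler attempt 2 (Exited) and attempt 3 (Running) in the same pod sandbox (beb4e35bc8e74) record successive restarts of the control plane. A sketch for inspecting one of the exited attempts directly, reusing a container ID from the first column:

	$ sudo crictl ps -a --name kube-scheduler
	$ sudo crictl logs eb52567ddb962    # the Exited kube-scheduler, attempt 2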
	
	
	==> coredns [2de81fa95be3a0f93a4643e03a074a159e08454b7ff651ed3d927106c1b74148] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:43161 - 51463 "HINFO IN 5671052926409919755.872030747434307965. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.049518594s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [90862d77cc8b68511e54e9f6f639637ba185742bc4a55d5098373d623eb55f09] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:57131 - 26142 "HINFO IN 5030995905101555605.8240098368165366984. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034335283s
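Both coredns instances report the same configuration SHA512, so the restart happened with an unchanged Corefile, and the "[INFO] plugin/health: Going into lameduck mode for 5s" line from the first instance matches the health plugin's lameduck setting. For reference, the Corefile kubeadm installs by default looks like the sketch below; the Corefile actually in effect lives in the coredns ConfigMap and may differ:

	.:53 {
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	       ttl 30
	    }
	    prometheus :9153
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    cache 30
	    loop
	    reload
	    loadbalance
	}

The live copy can be fetched with:

	$ kubectl --context functional-920930 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'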
	
	
	==> describe nodes <==
	Name:               functional-920930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-920930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=functional-920930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_39_30_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:39:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-920930
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:51:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:47:49 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:47:49 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:47:49 +0000   Mon, 30 Jun 2025 14:39:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:47:49 +0000   Mon, 30 Jun 2025 14:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    functional-920930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011052Ki
	  pods:               110
	System Info:
	  Machine ID:                 091fc3b53bfe4228b4114aaf6e11ec06
	  System UUID:                091fc3b5-3bfe-4228-b411-4aaf6e11ec06
	  Boot ID:                    3d819405-1145-4abc-bc97-f0a5952e9ba8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-2fgsq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-ggs67                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  default                     mysql-58ccfd96bb-2hbbf                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 coredns-674b8bbfcf-dwpq5                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-920930                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-920930              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-920930     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6gkck                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-920930              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-rf2w7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-kk2nt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m                kubelet          Node functional-920930 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-920930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-920930 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-920930 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-920930 event: Registered Node functional-920930 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.000040] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000295] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Jun30 14:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092998] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103461] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.165029] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.732010] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.128604] kauditd_printk_skb: 81 callbacks suppressed
	[  +4.661998] kauditd_printk_skb: 173 callbacks suppressed
	[Jun30 14:40] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.124008] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.123477] kauditd_printk_skb: 138 callbacks suppressed
	[  +0.066516] kauditd_printk_skb: 63 callbacks suppressed
	[  +7.874879] kauditd_printk_skb: 17 callbacks suppressed
	[Jun30 14:41] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.217146] kauditd_printk_skb: 34 callbacks suppressed
	[ +24.853412] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.480845] 9pnet: p9_fd_create_tcp (8344): problem connecting socket to 192.168.39.1
	[  +5.945689] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:42] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [ad4580dc5ad9242e37442263f1371d824e182f2f2645dbe60fed43992de99028] <==
	{"level":"info","ts":"2025-06-30T14:40:37.165503Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-06-30T14:40:37.165827Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"8069059f79d446ff","initial-advertise-peer-urls":["https://192.168.39.113:2380"],"listen-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.113:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-06-30T14:40:37.165872Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-30T14:40:37.165983Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:37.166012Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:39.013343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff is starting a new election at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became pre-candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgPreVoteResp from 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:40:39.013454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became candidate at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgVoteResp from 8069059f79d446ff at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became leader at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.013512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8069059f79d446ff elected leader 8069059f79d446ff at term 4"}
	{"level":"info","ts":"2025-06-30T14:40:39.016364Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"8069059f79d446ff","local-member-attributes":"{Name:functional-920930 ClientURLs:[https://192.168.39.113:2379]}","request-path":"/0/members/8069059f79d446ff/attributes","cluster-id":"3af003d6f0036250","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:40:39.016549Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:40:39.016883Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:40:39.017420Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:40:39.017558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:39.017607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:39.018102Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:40:39.018553Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:40:39.019210Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.113:2379"}
	{"level":"info","ts":"2025-06-30T14:41:44.641422Z","caller":"traceutil/trace.go:171","msg":"trace[2037699269] transaction","detail":"{read_only:false; response_revision:793; number_of_response:1; }","duration":"228.150284ms","start":"2025-06-30T14:41:44.413242Z","end":"2025-06-30T14:41:44.641393Z","steps":["trace[2037699269] 'process raft request'  (duration: 228.031768ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:50:39.043736Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2025-06-30T14:50:39.072901Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1106,"took":"28.602992ms","hash":3462812080,"current-db-size-bytes":4194304,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1748992,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-06-30T14:50:39.073022Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3462812080,"revision":1106,"compact-revision":-1}
	
	
	==> etcd [cd18d42edb6d18f2468de45aa9b27215bdf38dc80824552c4c3f92c22f1d2188] <==
	{"level":"info","ts":"2025-06-30T14:39:55.114512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became pre-candidate at term 2"}
	{"level":"info","ts":"2025-06-30T14:39:55.114569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgPreVoteResp from 8069059f79d446ff at term 2"}
	{"level":"info","ts":"2025-06-30T14:39:55.114596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff received MsgVoteResp from 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8069059f79d446ff became leader at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.114700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8069059f79d446ff elected leader 8069059f79d446ff at term 3"}
	{"level":"info","ts":"2025-06-30T14:39:55.120330Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:39:55.121082Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:39:55.121635Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.113:2379"}
	{"level":"info","ts":"2025-06-30T14:39:55.121877Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:39:55.122364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:39:55.122875Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:39:55.120294Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"8069059f79d446ff","local-member-attributes":"{Name:functional-920930 ClientURLs:[https://192.168.39.113:2379]}","request-path":"/0/members/8069059f79d446ff/attributes","cluster-id":"3af003d6f0036250","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:39:55.131019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:39:55.131066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:40:23.306997Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-06-30T14:40:23.308035Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-920930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	{"level":"warn","ts":"2025-06-30T14:40:23.382168Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382224Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382275Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:40:23.382282Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"info","ts":"2025-06-30T14:40:23.382330Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8069059f79d446ff","current-leader-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2025-06-30T14:40:23.389987Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:23.390088Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-06-30T14:40:23.390098Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-920930","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	
	
	==> kernel <==
	 14:51:06 up 12 min,  0 users,  load average: 0.12, 0.28, 0.24
	Linux functional-920930 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [74b099ec305ede665022bcc7ed25abeadf8cfa13d20965fcb26a735976f9126e] <==
	I0630 14:40:40.319968       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0630 14:40:40.426660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0630 14:40:41.154736       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 14:40:41.971098       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 14:40:42.010584       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0630 14:40:42.039456       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 14:40:42.047332       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 14:40:43.639026       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:40:43.874665       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 14:40:43.924528       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0630 14:40:43.976253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0630 14:40:59.043810       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:40:59.044205       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.17.70"}
	I0630 14:41:02.488213       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:04.616858       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.109.26"}
	I0630 14:41:04.633281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:06.176890       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:06.184813       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.205.82"}
	I0630 14:41:12.664714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:12.665154       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.25.211"}
	I0630 14:41:52.950997       1 controller.go:667] quota admission added evaluator for: namespaces
	I0630 14:41:53.284992       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.241.244"}
	I0630 14:41:53.292370       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:41:53.334412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.40.171"}
	I0630 14:50:40.217555       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1cee72c70835f54e69907901c7af6c2c803ee35892ab43567264ce1ee9f95859] <==
	I0630 14:40:43.553714       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0630 14:40:43.577042       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 14:40:43.578385       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0630 14:40:43.581096       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0630 14:40:43.583014       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:40:43.621033       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0630 14:40:43.797594       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0630 14:40:43.820318       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:43.820505       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0630 14:40:43.821742       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 14:40:43.824128       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 14:40:43.833393       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:43.872030       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0630 14:40:44.269400       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:44.293554       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:44.293586       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:40:44.293594       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0630 14:41:53.060063       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.076088       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.076352       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.091858       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.102085       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.102200       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.109054       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:41:53.113648       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
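
The burst of 'serviceaccount "kubernetes-dashboard" not found' errors above is a startup ordering race: the ReplicaSet controller tried to create the dashboard pods before the addon's ServiceAccount had been applied, and retried until it existed (both dashboard pods later appear in the node's pod listing above). A minimal way to confirm, assuming the kubectl context is named functional-920930 like the node:

	# Once the dashboard manifests finish applying, this succeeds and the
	# ReplicaSet sync errors stop.
	kubectl --context functional-920930 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard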
	
	
	==> kube-controller-manager [e904ed633b9afe47308aecc2ad9f4b63f3dbcb55fff94618ff04e740df1bab82] <==
	I0630 14:39:59.918115       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0630 14:39:59.918178       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0630 14:39:59.918629       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0630 14:39:59.920103       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 14:39:59.920307       1 shared_informer.go:357] "Caches are synced" controller="TTL"
	I0630 14:39:59.925191       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0630 14:39:59.925322       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0630 14:39:59.928206       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 14:39:59.943007       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0630 14:39:59.943069       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0630 14:39:59.943157       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 14:39:59.943279       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-920930"
	I0630 14:39:59.943339       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 14:39:59.967874       1 shared_informer.go:357] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0630 14:40:00.019005       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:40:00.028389       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0630 14:40:00.067093       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0630 14:40:00.117961       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0630 14:40:00.129551       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:00.137073       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:40:00.267074       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0630 14:40:00.638352       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:40:00.638382       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:40:00.638389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0630 14:40:00.648124       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [628c9b71ec42e89c2179a03fe95f29fcd3100ba90e1cb3d05167bd1459cd9cfc] <==
	E0630 14:39:58.003479       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:39:58.013306       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0630 14:39:58.013594       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:39:58.047614       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:39:58.047642       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:39:58.047662       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:39:58.057350       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:39:58.057659       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:39:58.057710       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:39:58.062763       1 config.go:199] "Starting service config controller"
	I0630 14:39:58.062781       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:39:58.062803       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:39:58.062806       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:39:58.062816       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:39:58.062819       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:39:58.063534       1 config.go:329] "Starting node config controller"
	I0630 14:39:58.064061       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:39:58.163472       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:39:58.163563       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:39:58.163537       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:39:58.164141       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [f7dab1382edafbf0e2a7ee60605e8e7c8e0e809b361e5726466ab3ab81448bf9] <==
	E0630 14:40:40.828155       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:40:40.840141       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	E0630 14:40:40.840317       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:40:40.875016       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:40:40.875078       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:40:40.875101       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:40:40.883154       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:40:40.883493       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:40:40.883534       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:40:40.887292       1 config.go:199] "Starting service config controller"
	I0630 14:40:40.887326       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:40:40.887340       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:40:40.887354       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:40:40.887367       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:40:40.887370       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:40:40.887385       1 config.go:329] "Starting node config controller"
	I0630 14:40:40.887402       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:40:40.987555       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 14:40:40.987660       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:40:40.987702       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:40:40.988035       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
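
Both kube-proxy runs log the same nftables cleanup failure and then proceed with the IPv4 iptables proxier: the "add table ip6 kube-proxy" command is rejected by the guest kernel, and the following "No iptables support for family" ipFamily="IPv6" line confirms IPv6 netfilter support is absent in this VM image. A minimal reproduction sketch from a shell on the node (e.g. via minikube ssh), assuming the nft binary is installed there:

	# The exact command kube-proxy ran; on this kernel it fails with
	# "Operation not supported", matching the log above.
	nft add table ip6 kube-proxy
	# Show which table families the kernel does accept, if any.
	nft list tables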
	
	
	==> kube-scheduler [defc9e44613eb5048116c8b40384ae180e3b6a82f3cdeab71b59f909a0dbd867] <==
	I0630 14:40:38.041658       1 serving.go:386] Generated self-signed cert in-memory
	W0630 14:40:40.219828       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0630 14:40:40.219905       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0630 14:40:40.219916       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 14:40:40.219922       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 14:40:40.265788       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 14:40:40.269342       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:40:40.272913       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 14:40:40.274764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:40.274828       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:40.274857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0630 14:40:40.375725       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [eb52567ddb9629e63a762c9c8afef2e58973e9d322ed192d6492a42a6560f36d] <==
	I0630 14:40:33.215007       1 serving.go:386] Generated self-signed cert in-memory
	W0630 14:40:34.261377       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.113:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.113:8441: connect: connection refused
	W0630 14:40:34.261420       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 14:40:34.261429       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 14:40:34.268807       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 14:40:34.268856       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0630 14:40:34.268874       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0630 14:40:34.270662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:34.270759       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0630 14:40:34.270806       1 shared_informer.go:353] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 14:40:34.271094       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 14:40:34.271240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0630 14:40:34.271294       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0630 14:40:34.271469       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	E0630 14:40:34.271630       1 server.go:271] "handlers are not fully synchronized" err="context canceled"
	E0630 14:40:34.271706       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 30 14:50:26 functional-920930 kubelet[5865]: E0630 14:50:26.537610    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295026536704092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:50:33 functional-920930 kubelet[5865]: E0630 14:50:33.237970    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-2hbbf" podUID="e81271f1-2994-4dbc-90dc-d663342b5710"
	Jun 30 14:50:35 functional-920930 kubelet[5865]: E0630 14:50:35.237186    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-rf2w7" podUID="1740de54-1854-4da6-840d-f02c9d630879"
	Jun 30 14:50:36 functional-920930 kubelet[5865]: E0630 14:50:36.335768    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode95e631d-9954-4cfb-b9d4-e3f07d238272/crio-f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052: Error finding container f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052: Status 404 returned error can't find the container with id f41b1eeade2617688c1a0f6bda5c295402fbe7c0bd7239523a380a9af1dfd052
	Jun 30 14:50:36 functional-920930 kubelet[5865]: E0630 14:50:36.336852    5865 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod5a607c43-dc74-4d3a-bac3-df6dd6d94ca2/crio-b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635: Error finding container b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635: Status 404 returned error can't find the container with id b150df55bd94f23ef98b100d78efbdf9c61a253c74dd745c0774de18cb995635
	Jun 30 14:50:36 functional-920930 kubelet[5865]: E0630 14:50:36.337203    5865 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod20cc71ef-99c9-4845-84c7-8abdcdf41a81/crio-a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707: Error finding container a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707: Status 404 returned error can't find the container with id a32019f55f5cfe9102d91b803b3adbf16701a1209f5c852e53df4a3981e5d707
	Jun 30 14:50:36 functional-920930 kubelet[5865]: E0630 14:50:36.337463    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode07868ddea98fad79007acfd248a84f0/crio-4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd: Error finding container 4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd: Status 404 returned error can't find the container with id 4c60d47a534d59b04831c1469b5277ee54134920aa4263bb05fd2ad77c258dcd
	Jun 30 14:50:36 functional-920930 kubelet[5865]: E0630 14:50:36.337688    5865 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2385b9fe58d0fedf6afdb66c5a0f0007/crio-7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109: Error finding container 7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109: Status 404 returned error can't find the container with id 7942bf3b782e412e13234ea6358c8afe4ec6d2b18fe51567665df038871d4109
	Jun 30 14:50:36 functional-920930 kubelet[5865]: E0630 14:50:36.539606    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295036539287456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:50:36 functional-920930 kubelet[5865]: E0630 14:50:36.539650    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295036539287456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:50:40 functional-920930 kubelet[5865]: E0630 14:50:40.237478    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-kk2nt" podUID="d2f64fa9-28ce-4baf-9b83-5e54c01e3a90"
	Jun 30 14:50:44 functional-920930 kubelet[5865]: E0630 14:50:44.241699    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-2hbbf" podUID="e81271f1-2994-4dbc-90dc-d663342b5710"
	Jun 30 14:50:46 functional-920930 kubelet[5865]: E0630 14:50:46.541520    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295046541202676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:50:46 functional-920930 kubelet[5865]: E0630 14:50:46.541558    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295046541202676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:50:52 functional-920930 kubelet[5865]: E0630 14:50:52.239333    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-kk2nt" podUID="d2f64fa9-28ce-4baf-9b83-5e54c01e3a90"
	Jun 30 14:50:56 functional-920930 kubelet[5865]: E0630 14:50:56.544206    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295056543857437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:50:56 functional-920930 kubelet[5865]: E0630 14:50:56.544734    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295056543857437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:50:57 functional-920930 kubelet[5865]: E0630 14:50:57.847808    5865 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:50:57 functional-920930 kubelet[5865]: E0630 14:50:57.848260    5865 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:50:57 functional-920930 kubelet[5865]: E0630 14:50:57.848576    5865 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96kw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:50:57 functional-920930 kubelet[5865]: E0630 14:50:57.850362    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fa16e84e-8e25-408f-a8ff-a1c6d9ea9cfe"
	Jun 30 14:50:58 functional-920930 kubelet[5865]: E0630 14:50:58.240175    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-2hbbf" podUID="e81271f1-2994-4dbc-90dc-d663342b5710"
	Jun 30 14:51:05 functional-920930 kubelet[5865]: E0630 14:51:05.238629    5865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-kk2nt" podUID="d2f64fa9-28ce-4baf-9b83-5e54c01e3a90"
	Jun 30 14:51:06 functional-920930 kubelet[5865]: E0630 14:51:06.546754    5865 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295066546408373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 14:51:06 functional-920930 kubelet[5865]: E0630 14:51:06.546784    5865 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751295066546408373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212364,},InodesUsed:&UInt64Value{Value:107,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
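
Every ImagePullBackOff in this kubelet log has the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests) on docker.io/mysql:5.7, docker.io/nginx, and the dashboard images. A minimal workaround sketch, assuming the minikube profile is named functional-920930 (matching the node) and the host has a Docker daemon that can still pull:

	# Pull on the host, then load into the minikube node so the kubelet
	# never has to contact docker.io itself.
	docker pull docker.io/mysql:5.7
	docker pull docker.io/nginx:latest
	minikube -p functional-920930 image load docker.io/mysql:5.7
	minikube -p functional-920930 image load docker.io/nginx:latest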
	
	
	==> storage-provisioner [04e2e562d14a8acb6138dd37ed850277e35bcbcb321bd9e7d10249f15753a0d1] <==
	W0630 14:50:41.434893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:43.438105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:43.443889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:45.447503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:45.452877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:47.457897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:47.464328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:49.467878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:49.472905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:51.476988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:51.481773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:53.485637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:53.491455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:55.494658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:55.504798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:57.508229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:57.519109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:59.523068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:50:59.528731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:51:01.531881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:51:01.537379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:51:03.540647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:51:03.545492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:51:05.549359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:51:05.562195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fecd739e675ebaa30a22543315a0fca357d3ebddc6bcaf0357a425170ebb8fd7] <==
	I0630 14:39:57.695452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0630 14:39:57.765283       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0630 14:39:57.765331       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0630 14:39:57.789184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:01.252742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:05.519143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:09.118547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:12.172013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.194883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.201225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 14:40:15.201372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0630 14:40:15.201895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df68d1a8-42ba-415d-b59a-ce223f2e6b54", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2 became leader
	I0630 14:40:15.202295       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2!
	W0630 14:40:15.205119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:15.211461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 14:40:15.303387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-920930_ecb68940-66a7-47a0-9ca5-e5807b7886d2!
	W0630 14:40:17.215167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:17.224366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:19.227471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:19.237822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:21.240881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:21.246290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:23.249317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:40:23.256612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
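Two recurring patterns in the log block above are worth separating from the actual failures: the kubelet eviction-manager errors complain that the CRI ImageFsInfo response carries an empty ContainerFilesystems list, and the storage-provisioner warning flood comes from its leader-election lock still being a legacy v1 Endpoints object (the leaderelection.go lines show it acquiring kube-system/k8s.io-minikube-hostpath). Both can be confirmed directly; a hedged sketch reusing this run's profile and context names:

	# CRI-O's image-filesystem report; an empty containerFilesystems section
	# reproduces the kubelet's "missing image stats" complaint above
	out/minikube-linux-amd64 -p functional-920930 ssh -- sudo crictl imagefsinfo
	# the Endpoints object used as the election lock, the source of every
	# "v1 Endpoints is deprecated" client warning in the provisioner logs
	kubectl --context functional-920930 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml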
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-920930 -n functional-920930
helpers_test.go:261: (dbg) Run:  kubectl --context functional-920930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt: exit status 1 (96.088751ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ae1b2578714684c9b16ce317577e269621c27b1fe871806ab82df86ea36b6fef
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 30 Jun 2025 14:41:43 +0000
	      Finished:     Mon, 30 Jun 2025 14:41:43 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m858h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-m858h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-920930
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m24s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.354s (35.793s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m24s  kubelet            Created container: mount-munger
	  Normal  Started    9m23s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-2hbbf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:04 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hsc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-86hsc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-2hbbf to functional-920930
	  Warning  Failed     3m58s (x3 over 8m33s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m27s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     73s (x2 over 9m31s)    kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     73s (x5 over 9m31s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x15 over 9m30s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     9s (x15 over 9m30s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-920930/192.168.39.113
	Start Time:       Mon, 30 Jun 2025 14:41:55 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96kw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-96kw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m11s                 default-scheduler  Successfully assigned default/sp-pod to functional-920930
	  Normal   BackOff    106s (x5 over 6m57s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     106s (x5 over 6m57s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    94s (x4 over 9m11s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     10s (x4 over 6m58s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     10s (x4 over 6m58s)   kubelet            Error: ErrImagePull

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-rf2w7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-kk2nt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-920930 describe pod busybox-mount mysql-58ccfd96bb-2hbbf sp-pod dashboard-metrics-scraper-5d59dccf9b-rf2w7 kubernetes-dashboard-7779f9b69b-kk2nt: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (603.05s)
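Every pull failure in the events above (mysql, nginx, the dashboard images) is the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a defect in the MySQL manifest itself. A minimal workaround sketch, assuming the host can still pull (or is authenticated) and reusing this run's profile name:

	# pull once on the host, then copy the image into the node's CRI-O store
	docker pull docker.io/mysql:5.7
	out/minikube-linux-amd64 -p functional-920930 image load docker.io/mysql:5.7
	# confirm the image is on the node so kubelet never has to contact Docker Hub
	out/minikube-linux-amd64 -p functional-920930 ssh -- sudo crictl images | grep mysql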

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (13.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdspecific-port3831704777/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.460589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0630 14:41:47.337858 1557732 retry.go:31] will retry after 459.137423ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (209.613789ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0630 14:41:48.007325 1557732 retry.go:31] will retry after 782.938897ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.442852ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0630 14:41:48.994558 1557732 retry.go:31] will retry after 1.226700896s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.757029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0630 14:41:50.433377 1557732 retry.go:31] will retry after 1.630253517s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.372709ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0630 14:41:52.280231 1557732 retry.go:31] will retry after 1.307661158s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.100145ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0630 14:41:53.805578 1557732 retry.go:31] will retry after 5.596900079s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (208.287797ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 12.477163346s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (212.700438ms)

                                                
                                                
-- stdout --
	total 0
	drwxr-xr-x  2 root root  40 Jun 30 14:41 .
	drwxr-xr-x 20 root root 560 Jun 30 14:41 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-920930 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "sudo umount -f /mount-9p": exit status 1 (214.28678ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-920930 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdspecific-port3831704777/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdspecific-port3831704777/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdspecific-port3831704777/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:46464
* Userspace file server: ufs starting
* Userspace file server is shutdown

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdspecific-port3831704777/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0630 14:41:47.180198 1571685 out.go:345] Setting OutFile to fd 1 ...
I0630 14:41:47.180474 1571685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:41:47.180483 1571685 out.go:358] Setting ErrFile to fd 2...
I0630 14:41:47.180487 1571685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:41:47.180701 1571685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
I0630 14:41:47.180979 1571685 mustload.go:65] Loading cluster: functional-920930
I0630 14:41:47.181310 1571685 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:41:47.181676 1571685 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:41:47.181750 1571685 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:41:47.197646 1571685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
I0630 14:41:47.198419 1571685 main.go:141] libmachine: () Calling .GetVersion
I0630 14:41:47.199183 1571685 main.go:141] libmachine: Using API Version  1
I0630 14:41:47.199212 1571685 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:41:47.199623 1571685 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:41:47.199851 1571685 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:41:47.201743 1571685 host.go:66] Checking if "functional-920930" exists ...
I0630 14:41:47.202122 1571685 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:41:47.202176 1571685 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:41:47.217932 1571685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
I0630 14:41:47.218610 1571685 main.go:141] libmachine: () Calling .GetVersion
I0630 14:41:47.219155 1571685 main.go:141] libmachine: Using API Version  1
I0630 14:41:47.219174 1571685 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:41:47.219549 1571685 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:41:47.219708 1571685 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:41:47.219840 1571685 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:41:47.220058 1571685 main.go:141] libmachine: (functional-920930) Calling .GetIP
I0630 14:41:47.223604 1571685 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:47.224183 1571685 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:41:47.224206 1571685 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:47.225082 1571685 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:41:47.227584 1571685 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdspecific-port3831704777/001 into VM as /mount-9p ...
I0630 14:41:47.229474 1571685 out.go:177]   - Mount type:   9p
I0630 14:41:47.230922 1571685 out.go:177]   - User ID:      docker
I0630 14:41:47.232461 1571685 out.go:177]   - Group ID:     docker
I0630 14:41:47.234407 1571685 out.go:177]   - Version:      9p2000.L
I0630 14:41:47.235876 1571685 out.go:177]   - Message Size: 262144
I0630 14:41:47.237178 1571685 out.go:177]   - Options:      map[]
I0630 14:41:47.238999 1571685 out.go:177]   - Bind Address: 192.168.39.1:46464
I0630 14:41:47.240303 1571685 out.go:177] * Userspace file server: 
I0630 14:41:47.240408 1571685 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0630 14:41:47.240451 1571685 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:41:47.241459 1571685 main.go:125] stdlog: ufs.go:27 listen tcp 192.168.39.1:46464: bind: address already in use
I0630 14:41:47.242725 1571685 out.go:177] * Userspace file server is shutdown
I0630 14:41:47.244250 1571685 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:47.244595 1571685 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:41:47.244626 1571685 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:41:47.244833 1571685 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:41:47.245031 1571685 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:41:47.245225 1571685 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:41:47.245440 1571685 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:41:47.325332 1571685 mount.go:180] unmount for /mount-9p ran successfully
I0630 14:41:47.325369 1571685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0630 14:41:47.342642 1571685 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I0630 14:41:47.362078 1571685 out.go:201] 
W0630 14:41:47.363189 1571685 out.go:270] X Exiting due to GUEST_MOUNT: mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p": Process exited with status 32
stdout:

                                                
                                                
stderr:
mount: /mount-9p: mount system call failed: Connection refused.
dmesg(1) may have more information after failed mount system call.

                                                
                                                
X Exiting due to GUEST_MOUNT: mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p": Process exited with status 32
stdout:

                                                
                                                
stderr:
mount: /mount-9p: mount system call failed: Connection refused.
dmesg(1) may have more information after failed mount system call.

                                                
                                                
W0630 14:41:47.363218 1571685 out.go:270] * 
* 
W0630 14:41:47.371946 1571685 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_80ca3367a6047f7f56b5d33ac73d463b2c04684c_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_80ca3367a6047f7f56b5d33ac73d463b2c04684c_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0630 14:41:47.373820 1571685 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (13.01s)
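The stderr above contains the actual failure: the userspace 9p file server never started because 192.168.39.1:46464 was still bound (stdlog: ufs.go:27 listen tcp 192.168.39.1:46464: bind: address already in use), so the subsequent in-VM mount call was refused. A hedged host-side check, with the mount path shortened for illustration:

	# show which process still holds the fixed test port on the host
	ss -tlnp | grep ':46464'
	# minikube's --port flag treats 0 as "pick any free port", which sidesteps the collision
	out/minikube-linux-amd64 mount -p functional-920930 /tmp/mountdir:/mount-9p --port 0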

                                                
                                    
TestPreload (178.26s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-833225 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-833225 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.398365681s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-833225 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-833225 image pull gcr.io/k8s-minikube/busybox: (3.922613738s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-833225
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-833225: (7.319211797s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-833225 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0630 15:35:20.916943 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-833225 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.498179545s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-833225 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
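The first start pulled gcr.io/k8s-minikube/busybox successfully, but after the stop/start cycle the image list contains only the v1.24.4 preload contents, so the pulled image did not survive the restart. A hedged way to confirm it is gone from the node's store rather than merely missing from the listing:

	# inspect the node's CRI-O image store directly
	out/minikube-linux-amd64 -p test-preload-833225 ssh -- sudo crictl images
	# re-pull inside the node to verify the runtime itself is healthy
	out/minikube-linux-amd64 -p test-preload-833225 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox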
panic.go:631: *** TestPreload FAILED at 2025-06-30 15:35:36.426487538 +0000 UTC m=+4679.514368339
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-833225 -n test-preload-833225
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-833225 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-833225 logs -n 25: (1.152846768s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-973445 ssh -n                                                                 | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:21 UTC |
	|         | multinode-973445-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-973445 ssh -n multinode-973445 sudo cat                                       | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:21 UTC |
	|         | /home/docker/cp-test_multinode-973445-m03_multinode-973445.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-973445 cp multinode-973445-m03:/home/docker/cp-test.txt                       | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:21 UTC |
	|         | multinode-973445-m02:/home/docker/cp-test_multinode-973445-m03_multinode-973445-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-973445 ssh -n                                                                 | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:21 UTC |
	|         | multinode-973445-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-973445 ssh -n multinode-973445-m02 sudo cat                                   | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:21 UTC |
	|         | /home/docker/cp-test_multinode-973445-m03_multinode-973445-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-973445 node stop m03                                                          | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:21 UTC |
	| node    | multinode-973445 node start                                                             | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:21 UTC |
	|         | m03 -v=5 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-973445                                                                | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC |                     |
	| stop    | -p multinode-973445                                                                     | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:24 UTC |
	| start   | -p multinode-973445                                                                     | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:24 UTC | 30 Jun 25 15:27 UTC |
	|         | --wait=true -v=5                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-973445                                                                | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:27 UTC |                     |
	| node    | multinode-973445 node delete                                                            | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:27 UTC | 30 Jun 25 15:27 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-973445 stop                                                                   | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:27 UTC | 30 Jun 25 15:30 UTC |
	| start   | -p multinode-973445                                                                     | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:30 UTC | 30 Jun 25 15:31 UTC |
	|         | --wait=true -v=5                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-973445                                                                | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:31 UTC |                     |
	| start   | -p multinode-973445-m02                                                                 | multinode-973445-m02 | jenkins | v1.36.0 | 30 Jun 25 15:31 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-973445-m03                                                                 | multinode-973445-m03 | jenkins | v1.36.0 | 30 Jun 25 15:31 UTC | 30 Jun 25 15:32 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-973445                                                                 | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC |                     |
	| delete  | -p multinode-973445-m03                                                                 | multinode-973445-m03 | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC | 30 Jun 25 15:32 UTC |
	| delete  | -p multinode-973445                                                                     | multinode-973445     | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC | 30 Jun 25 15:32 UTC |
	| start   | -p test-preload-833225                                                                  | test-preload-833225  | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC | 30 Jun 25 15:34 UTC |
	|         | --memory=3072                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-833225 image pull                                                          | test-preload-833225  | jenkins | v1.36.0 | 30 Jun 25 15:34 UTC | 30 Jun 25 15:34 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-833225                                                                  | test-preload-833225  | jenkins | v1.36.0 | 30 Jun 25 15:34 UTC | 30 Jun 25 15:34 UTC |
	| start   | -p test-preload-833225                                                                  | test-preload-833225  | jenkins | v1.36.0 | 30 Jun 25 15:34 UTC | 30 Jun 25 15:35 UTC |
	|         | --memory=3072                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-833225 image list                                                          | test-preload-833225  | jenkins | v1.36.0 | 30 Jun 25 15:35 UTC | 30 Jun 25 15:35 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:34:25
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:34:25.739987 1597492 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:34:25.740236 1597492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:34:25.740243 1597492 out.go:358] Setting ErrFile to fd 2...
	I0630 15:34:25.740247 1597492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:34:25.740442 1597492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:34:25.741032 1597492 out.go:352] Setting JSON to false
	I0630 15:34:25.742092 1597492 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33358,"bootTime":1751264308,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:34:25.742225 1597492 start.go:140] virtualization: kvm guest
	I0630 15:34:25.744756 1597492 out.go:177] * [test-preload-833225] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:34:25.746362 1597492 notify.go:220] Checking for updates...
	I0630 15:34:25.746404 1597492 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:34:25.747899 1597492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:34:25.749396 1597492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:34:25.750713 1597492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:34:25.752247 1597492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:34:25.753960 1597492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:34:25.755869 1597492 config.go:182] Loaded profile config "test-preload-833225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0630 15:34:25.756288 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:34:25.756361 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:34:25.772355 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0630 15:34:25.772879 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:34:25.773441 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:34:25.773468 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:34:25.773860 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:34:25.774070 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:34:25.776177 1597492 out.go:177] * Kubernetes 1.33.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.33.2
	I0630 15:34:25.777636 1597492 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:34:25.778025 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:34:25.778098 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:34:25.793548 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0630 15:34:25.794137 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:34:25.794623 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:34:25.794640 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:34:25.795004 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:34:25.795238 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:34:25.834427 1597492 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 15:34:25.835820 1597492 start.go:304] selected driver: kvm2
	I0630 15:34:25.835839 1597492 start.go:908] validating driver "kvm2" against &{Name:test-preload-833225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-833225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:34:25.835971 1597492 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:34:25.836774 1597492 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:34:25.836866 1597492 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:34:25.853338 1597492 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:34:25.853778 1597492 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:34:25.853817 1597492 cni.go:84] Creating CNI manager for ""
	I0630 15:34:25.853867 1597492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:34:25.853927 1597492 start.go:347] cluster config:
	{Name:test-preload-833225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-833225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:34:25.854027 1597492 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:34:25.855732 1597492 out.go:177] * Starting "test-preload-833225" primary control-plane node in "test-preload-833225" cluster
	I0630 15:34:25.856942 1597492 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0630 15:34:25.955165 1597492 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0630 15:34:25.955217 1597492 cache.go:56] Caching tarball of preloaded images
	I0630 15:34:25.955399 1597492 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0630 15:34:25.957306 1597492 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0630 15:34:25.958836 1597492 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0630 15:34:26.061811 1597492 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0630 15:34:39.199074 1597492 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0630 15:34:39.199171 1597492 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0630 15:34:40.085302 1597492 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
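The download URL above embeds its expected digest as a "?checksum=md5:<hex>" query parameter, and the preload is only trusted once the downloaded tarball hashes to that value. A minimal sketch of the verify step, assuming the file is already on disk and a hypothetical verifyMD5 helper (minikube's actual download.go delegates this to a download library):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it to the hex
    // digest taken from the checksum query parameter (hypothetical helper).
    func verifyMD5(path, wantHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        // Digest copied from the download URL logged above.
        err := verifyMD5("preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
            "b2ee0ab83ed99f9e7ff71cb0cf27e8f9")
        fmt.Println(err)
    }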
	I0630 15:34:40.085499 1597492 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/config.json ...
	I0630 15:34:40.085738 1597492 start.go:360] acquireMachinesLock for test-preload-833225: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:34:40.085811 1597492 start.go:364] duration metric: took 48.774µs to acquireMachinesLock for "test-preload-833225"
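The acquireMachinesLock spec above carries a 500ms retry delay and a 13m timeout. A minimal sketch of that polling pattern, assuming a simple O_EXCL lock file rather than the mutex library minikube actually uses:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file, retrying every delay
    // until the timeout expires (sketch only, not minikube's implementation).
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/minikube-machines.lock",
            500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held")
    }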
	I0630 15:34:40.085827 1597492 start.go:96] Skipping create...Using existing machine configuration
	I0630 15:34:40.085832 1597492 fix.go:54] fixHost starting: 
	I0630 15:34:40.086101 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:34:40.086140 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:34:40.102559 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0630 15:34:40.103014 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:34:40.103532 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:34:40.103560 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:34:40.103948 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:34:40.104180 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:34:40.104407 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetState
	I0630 15:34:40.106429 1597492 fix.go:112] recreateIfNeeded on test-preload-833225: state=Stopped err=<nil>
	I0630 15:34:40.106486 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	W0630 15:34:40.106662 1597492 fix.go:138] unexpected machine state, will restart: <nil>
	I0630 15:34:40.109498 1597492 out.go:177] * Restarting existing kvm2 VM for "test-preload-833225" ...
	I0630 15:34:40.111006 1597492 main.go:141] libmachine: (test-preload-833225) Calling .Start
	I0630 15:34:40.111257 1597492 main.go:141] libmachine: (test-preload-833225) starting domain...
	I0630 15:34:40.111280 1597492 main.go:141] libmachine: (test-preload-833225) ensuring networks are active...
	I0630 15:34:40.112294 1597492 main.go:141] libmachine: (test-preload-833225) Ensuring network default is active
	I0630 15:34:40.112907 1597492 main.go:141] libmachine: (test-preload-833225) Ensuring network mk-test-preload-833225 is active
	I0630 15:34:40.113379 1597492 main.go:141] libmachine: (test-preload-833225) getting domain XML...
	I0630 15:34:40.114371 1597492 main.go:141] libmachine: (test-preload-833225) creating domain...
	I0630 15:34:41.433488 1597492 main.go:141] libmachine: (test-preload-833225) waiting for IP...
	I0630 15:34:41.434365 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:41.434791 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:41.434924 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:41.434797 1597575 retry.go:31] will retry after 274.979155ms: waiting for domain to come up
	I0630 15:34:41.711530 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:41.712141 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:41.712180 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:41.712045 1597575 retry.go:31] will retry after 311.782365ms: waiting for domain to come up
	I0630 15:34:42.025883 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:42.026292 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:42.026320 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:42.026263 1597575 retry.go:31] will retry after 422.584667ms: waiting for domain to come up
	I0630 15:34:42.451319 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:42.451913 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:42.451958 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:42.451896 1597575 retry.go:31] will retry after 522.653042ms: waiting for domain to come up
	I0630 15:34:42.977136 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:42.977683 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:42.977716 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:42.977651 1597575 retry.go:31] will retry after 676.471093ms: waiting for domain to come up
	I0630 15:34:43.655883 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:43.656429 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:43.656468 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:43.656381 1597575 retry.go:31] will retry after 816.991961ms: waiting for domain to come up
	I0630 15:34:44.475453 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:44.476011 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:44.476040 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:44.475955 1597575 retry.go:31] will retry after 1.176794263s: waiting for domain to come up
	I0630 15:34:45.655183 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:45.655871 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:45.655906 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:45.655798 1597575 retry.go:31] will retry after 1.235475291s: waiting for domain to come up
	I0630 15:34:46.893615 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:46.894291 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:46.894325 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:46.894225 1597575 retry.go:31] will retry after 1.633035416s: waiting for domain to come up
	I0630 15:34:48.529285 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:48.529754 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:48.529780 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:48.529724 1597575 retry.go:31] will retry after 2.159249204s: waiting for domain to come up
	I0630 15:34:50.690616 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:50.691159 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:50.691201 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:50.691118 1597575 retry.go:31] will retry after 2.458551137s: waiting for domain to come up
	I0630 15:34:53.151843 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:53.152368 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:53.152413 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:53.152324 1597575 retry.go:31] will retry after 3.248929779s: waiting for domain to come up
	I0630 15:34:56.403351 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:56.403778 1597492 main.go:141] libmachine: (test-preload-833225) DBG | unable to find current IP address of domain test-preload-833225 in network mk-test-preload-833225
	I0630 15:34:56.403802 1597492 main.go:141] libmachine: (test-preload-833225) DBG | I0630 15:34:56.403736 1597575 retry.go:31] will retry after 3.227682074s: waiting for domain to come up
	I0630 15:34:59.635237 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.635774 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has current primary IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.635818 1597492 main.go:141] libmachine: (test-preload-833225) found domain IP: 192.168.39.161
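The retry.go:31 lines above show the wait-for-IP loop retrying with intervals that grow from roughly 275ms to 3.2s. A minimal sketch of that shape, assuming jittered exponential backoff (the exact policy lives in minikube's internal retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryBackoff calls fn until it succeeds or attempts run out,
    // sleeping a jittered, growing interval between tries.
    func retryBackoff(attempts int, base time.Duration, fn func() error) error {
        wait := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // base is large enough that wait/2 is always positive here.
            jitter := time.Duration(rand.Int63n(int64(wait) / 2))
            fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
            time.Sleep(wait + jitter)
            wait = wait * 3 / 2 // grow ~1.5x per attempt
        }
        return err
    }

    func main() {
        tries := 0
        err := retryBackoff(13, 250*time.Millisecond, func() error {
            tries++
            if tries < 5 {
                return errors.New("waiting for domain to come up")
            }
            return nil
        })
        fmt.Println(err)
    }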
	I0630 15:34:59.635845 1597492 main.go:141] libmachine: (test-preload-833225) reserving static IP address...
	I0630 15:34:59.636277 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "test-preload-833225", mac: "52:54:00:cc:3a:e7", ip: "192.168.39.161"} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:34:59.636295 1597492 main.go:141] libmachine: (test-preload-833225) reserved static IP address 192.168.39.161 for domain test-preload-833225
	I0630 15:34:59.636309 1597492 main.go:141] libmachine: (test-preload-833225) DBG | skip adding static IP to network mk-test-preload-833225 - found existing host DHCP lease matching {name: "test-preload-833225", mac: "52:54:00:cc:3a:e7", ip: "192.168.39.161"}
	I0630 15:34:59.636320 1597492 main.go:141] libmachine: (test-preload-833225) DBG | Getting to WaitForSSH function...
	I0630 15:34:59.636329 1597492 main.go:141] libmachine: (test-preload-833225) waiting for SSH...
	I0630 15:34:59.638407 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.638688 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:34:59.638710 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.638848 1597492 main.go:141] libmachine: (test-preload-833225) DBG | Using SSH client type: external
	I0630 15:34:59.638887 1597492 main.go:141] libmachine: (test-preload-833225) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa (-rw-------)
	I0630 15:34:59.638921 1597492 main.go:141] libmachine: (test-preload-833225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:34:59.638935 1597492 main.go:141] libmachine: (test-preload-833225) DBG | About to run SSH command:
	I0630 15:34:59.638967 1597492 main.go:141] libmachine: (test-preload-833225) DBG | exit 0
	I0630 15:34:59.769541 1597492 main.go:141] libmachine: (test-preload-833225) DBG | SSH cmd err, output: <nil>: 
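"Using SSH client type: external" above means the probe shells out to /usr/bin/ssh with the listed options and runs `exit 0` until the guest answers; a nil error is the readiness signal. A minimal sketch of that probe, with hypothetical address and key values standing in for the machine above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReady runs `ssh ... exit 0` against the guest; nil means sshd
    // is up and the key is accepted (options mirror the log above).
    func sshReady(addr, keyPath string) error {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+addr,
            "exit 0")
        return cmd.Run()
    }

    func main() {
        fmt.Println(sshReady("192.168.39.161", "/path/to/id_rsa"))
    }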
	I0630 15:34:59.769943 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetConfigRaw
	I0630 15:34:59.770576 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetIP
	I0630 15:34:59.773279 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.773626 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:34:59.773653 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.774064 1597492 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/config.json ...
	I0630 15:34:59.774373 1597492 machine.go:93] provisionDockerMachine start ...
	I0630 15:34:59.774404 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:34:59.774697 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:34:59.777623 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.777929 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:34:59.777952 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.778160 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:34:59.778359 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:34:59.778514 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:34:59.778664 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:34:59.778858 1597492 main.go:141] libmachine: Using SSH client type: native
	I0630 15:34:59.779204 1597492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0630 15:34:59.779226 1597492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0630 15:34:59.897942 1597492 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0630 15:34:59.897974 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetMachineName
	I0630 15:34:59.898251 1597492 buildroot.go:166] provisioning hostname "test-preload-833225"
	I0630 15:34:59.898282 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetMachineName
	I0630 15:34:59.898491 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:34:59.901718 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.902223 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:34:59.902252 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:34:59.902433 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:34:59.902957 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:34:59.903163 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:34:59.903416 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:34:59.903599 1597492 main.go:141] libmachine: Using SSH client type: native
	I0630 15:34:59.903812 1597492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0630 15:34:59.903825 1597492 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-833225 && echo "test-preload-833225" | sudo tee /etc/hostname
	I0630 15:35:00.038374 1597492 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-833225
	
	I0630 15:35:00.038405 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:00.041473 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.041820 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:00.041851 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.042137 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:00.042335 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:00.042486 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:00.042602 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:00.042721 1597492 main.go:141] libmachine: Using SSH client type: native
	I0630 15:35:00.042923 1597492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0630 15:35:00.042941 1597492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-833225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-833225/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-833225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:35:00.171730 1597492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:35:00.171771 1597492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:35:00.171819 1597492 buildroot.go:174] setting up certificates
	I0630 15:35:00.171830 1597492 provision.go:84] configureAuth start
	I0630 15:35:00.171840 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetMachineName
	I0630 15:35:00.172177 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetIP
	I0630 15:35:00.175154 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.175634 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:00.175667 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.175854 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:00.178440 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.178837 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:00.178867 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.179050 1597492 provision.go:143] copyHostCerts
	I0630 15:35:00.179139 1597492 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:35:00.179155 1597492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:35:00.179251 1597492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:35:00.179401 1597492 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:35:00.179415 1597492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:35:00.179459 1597492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:35:00.179543 1597492 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:35:00.179553 1597492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:35:00.179588 1597492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
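copyHostCerts above follows a remove-then-copy pattern (found/rm/cp) so a stale certificate never survives a refresh. A minimal sketch of one such copy, assuming plain files and a hypothetical copyFile helper:

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyFile replaces dst with src: delete any existing file first,
    // then copy with owner-only permissions (hypothetical helper
    // mirroring the found/rm/cp sequence in the log).
    func copyFile(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        fmt.Println(copyFile(".minikube/certs/ca.pem", ".minikube/ca.pem"))
    }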
	I0630 15:35:00.179663 1597492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.test-preload-833225 san=[127.0.0.1 192.168.39.161 localhost minikube test-preload-833225]
	I0630 15:35:00.598879 1597492 provision.go:177] copyRemoteCerts
	I0630 15:35:00.598972 1597492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:35:00.599003 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:00.602662 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.603250 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:00.603308 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.603623 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:00.603918 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:00.604197 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:00.604460 1597492 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa Username:docker}
	I0630 15:35:00.693495 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:35:00.722465 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0630 15:35:00.752112 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:35:00.782067 1597492 provision.go:87] duration metric: took 610.21833ms to configureAuth
	I0630 15:35:00.782108 1597492 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:35:00.782321 1597492 config.go:182] Loaded profile config "test-preload-833225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0630 15:35:00.782422 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:00.785654 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.785995 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:00.786034 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:00.786226 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:00.786445 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:00.786634 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:00.786785 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:00.786974 1597492 main.go:141] libmachine: Using SSH client type: native
	I0630 15:35:00.787247 1597492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0630 15:35:00.787265 1597492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:35:01.034267 1597492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:35:01.034305 1597492 machine.go:96] duration metric: took 1.259913518s to provisionDockerMachine
	I0630 15:35:01.034318 1597492 start.go:293] postStartSetup for "test-preload-833225" (driver="kvm2")
	I0630 15:35:01.034331 1597492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:35:01.034356 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:35:01.035015 1597492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:35:01.035049 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:01.039406 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.039803 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:01.039830 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.040036 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:01.040273 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:01.040487 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:01.040612 1597492 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa Username:docker}
	I0630 15:35:01.130459 1597492 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:35:01.135221 1597492 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:35:01.135259 1597492 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:35:01.135344 1597492 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:35:01.135449 1597492 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:35:01.135539 1597492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:35:01.146734 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:35:01.177250 1597492 start.go:296] duration metric: took 142.915313ms for postStartSetup
	I0630 15:35:01.177322 1597492 fix.go:56] duration metric: took 21.091488923s for fixHost
	I0630 15:35:01.177348 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:01.180448 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.180877 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:01.180910 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.181036 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:01.181240 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:01.181417 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:01.181613 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:01.181772 1597492 main.go:141] libmachine: Using SSH client type: native
	I0630 15:35:01.181998 1597492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0630 15:35:01.182017 1597492 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:35:01.306445 1597492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751297701.262645325
	
	I0630 15:35:01.306472 1597492 fix.go:216] guest clock: 1751297701.262645325
	I0630 15:35:01.306482 1597492 fix.go:229] Guest: 2025-06-30 15:35:01.262645325 +0000 UTC Remote: 2025-06-30 15:35:01.177327304 +0000 UTC m=+35.477813257 (delta=85.318021ms)
	I0630 15:35:01.306540 1597492 fix.go:200] guest clock delta is within tolerance: 85.318021ms
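The fix.go lines above parse the guest's `date +%s.%N` output and compare host and guest clocks before continuing. A minimal sketch of that delta check, assuming a one-second tolerance (the actual threshold is internal to minikube's fix.go):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output into a time.Time
    // without losing nanosecond precision.
    func parseGuestClock(stamp string) (time.Time, error) {
        secStr, fracStr, _ := strings.Cut(stamp, ".")
        sec, err := strconv.ParseInt(secStr, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt((fracStr + "000000000")[:9], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1751297701.262645325") // stamp from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        delta := time.Since(guest) // host-vs-guest skew
        fmt.Printf("delta=%v within tolerance=%v\n", delta, delta.Abs() < time.Second)
    }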
	I0630 15:35:01.306548 1597492 start.go:83] releasing machines lock for "test-preload-833225", held for 21.220726084s
	I0630 15:35:01.306579 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:35:01.306886 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetIP
	I0630 15:35:01.309781 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.310186 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:01.310212 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.310361 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:35:01.311117 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:35:01.311423 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:35:01.311625 1597492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:35:01.311677 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:01.311791 1597492 ssh_runner.go:195] Run: cat /version.json
	I0630 15:35:01.311823 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:01.314819 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.315033 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.315251 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:01.315276 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.315512 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:01.315564 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:01.315595 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:01.315697 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:01.315763 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:01.315834 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:01.315897 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:01.316037 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:01.316041 1597492 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa Username:docker}
	I0630 15:35:01.316191 1597492 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa Username:docker}
	I0630 15:35:01.436501 1597492 ssh_runner.go:195] Run: systemctl --version
	I0630 15:35:01.442767 1597492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:35:01.588037 1597492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:35:01.594967 1597492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:35:01.595083 1597492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:35:01.613861 1597492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:35:01.613890 1597492 start.go:495] detecting cgroup driver to use...
	I0630 15:35:01.613954 1597492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:35:01.632350 1597492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:35:01.649728 1597492 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:35:01.650313 1597492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:35:01.667071 1597492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:35:01.684154 1597492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:35:01.821235 1597492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:35:01.973329 1597492 docker.go:246] disabling docker service ...
	I0630 15:35:01.973431 1597492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:35:01.988986 1597492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:35:02.004686 1597492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:35:02.190104 1597492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:35:02.328885 1597492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:35:02.343992 1597492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:35:02.365954 1597492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0630 15:35:02.366020 1597492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:35:02.377155 1597492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:35:02.377227 1597492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:35:02.388667 1597492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:35:02.400505 1597492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:35:02.412380 1597492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:35:02.424840 1597492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:35:02.436348 1597492 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:35:02.456303 1597492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:35:02.469240 1597492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:35:02.479261 1597492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:35:02.479326 1597492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:35:02.494173 1597492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
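The sequence above is a verify-then-enable fallback: probe the bridge-netfilter sysctl, load br_netfilter when the proc file is missing, then force ip_forward on. A minimal sketch of the same three steps via exec, assuming root access (minikube runs these through its ssh_runner instead):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // 1. Probe: fails with "cannot stat" until br_netfilter is loaded.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // 2. Fallback: load the module so the sysctl file appears.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe failed:", err)
                return
            }
        }
        // 3. Ensure forwarding, matching the `echo 1 > .../ip_forward` above.
        out, err := exec.Command("sudo", "sh", "-c",
            "echo 1 > /proc/sys/net/ipv4/ip_forward").CombinedOutput()
        fmt.Println(string(out), err)
    }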
	I0630 15:35:02.505119 1597492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:35:02.644683 1597492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:35:02.754180 1597492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:35:02.754259 1597492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:35:02.759092 1597492 start.go:563] Will wait 60s for crictl version
	I0630 15:35:02.759160 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:02.762947 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:35:02.806369 1597492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
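"Will wait 60s for socket path" above is a bounded poll for the CRI socket before crictl is invoked. A minimal sketch using a unix-socket dial rather than the stat-based check the log shows:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSocket dials the unix socket until it accepts or the
    // deadline passes (sketch; the logged check stats the path instead).
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("unix", path, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("socket %s not ready after %v", path, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }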
	I0630 15:35:02.806486 1597492 ssh_runner.go:195] Run: crio --version
	I0630 15:35:02.834893 1597492 ssh_runner.go:195] Run: crio --version
	I0630 15:35:02.866923 1597492 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0630 15:35:02.868366 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetIP
	I0630 15:35:02.871796 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:02.872208 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:02.872235 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:02.872500 1597492 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 15:35:02.876628 1597492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:35:02.890217 1597492 kubeadm.go:875] updating cluster {Name:test-preload-833225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-833225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:35:02.890346 1597492 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0630 15:35:02.890391 1597492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:35:02.929927 1597492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0630 15:35:02.930013 1597492 ssh_runner.go:195] Run: which lz4
	I0630 15:35:02.933973 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:35:02.938241 1597492 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:35:02.938280 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0630 15:35:04.625802 1597492 crio.go:462] duration metric: took 1.691860517s to copy over tarball
	I0630 15:35:04.625911 1597492 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:35:06.751222 1597492 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125270856s)
	I0630 15:35:06.751263 1597492 crio.go:469] duration metric: took 2.125414266s to extract the tarball
	I0630 15:35:06.751275 1597492 ssh_runner.go:146] rm: /preloaded.tar.lz4
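The copy-over-then-extract step above shells out to tar with lz4 as the decompression filter, preserving security xattrs so capabilities survive. A minimal sketch of the same invocation (assumes the lz4 binary is present on the guest, as it is in the minikube ISO):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the logged command: xattrs preserved, lz4 as the
        // decompress filter, extracted under /var.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }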
	I0630 15:35:06.792705 1597492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:35:06.834920 1597492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0630 15:35:06.834951 1597492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0630 15:35:06.835031 1597492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:35:06.835117 1597492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0630 15:35:06.835157 1597492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0630 15:35:06.835166 1597492 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0630 15:35:06.835178 1597492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0630 15:35:06.835131 1597492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0630 15:35:06.835205 1597492 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0630 15:35:06.835143 1597492 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0630 15:35:06.836556 1597492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0630 15:35:06.836575 1597492 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0630 15:35:06.836592 1597492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0630 15:35:06.836610 1597492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0630 15:35:06.836613 1597492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0630 15:35:06.836580 1597492 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0630 15:35:06.836559 1597492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:35:06.836649 1597492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0630 15:35:07.036186 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0630 15:35:07.053080 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0630 15:35:07.065211 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0630 15:35:07.069580 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0630 15:35:07.086052 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0630 15:35:07.103732 1597492 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0630 15:35:07.103788 1597492 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0630 15:35:07.103839 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:07.108068 1597492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0630 15:35:07.108125 1597492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0630 15:35:07.108178 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:07.111923 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0630 15:35:07.138929 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0630 15:35:07.184123 1597492 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0630 15:35:07.184185 1597492 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0630 15:35:07.184228 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:07.186183 1597492 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0630 15:35:07.186228 1597492 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0630 15:35:07.186264 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:07.224844 1597492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0630 15:35:07.224880 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0630 15:35:07.224897 1597492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0630 15:35:07.224935 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:07.225049 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0630 15:35:07.232088 1597492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0630 15:35:07.232177 1597492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0630 15:35:07.232247 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:07.267939 1597492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0630 15:35:07.268011 1597492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0630 15:35:07.268038 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0630 15:35:07.268073 1597492 ssh_runner.go:195] Run: which crictl
	I0630 15:35:07.268121 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0630 15:35:07.268185 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0630 15:35:07.316129 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0630 15:35:07.316157 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0630 15:35:07.316246 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0630 15:35:07.346110 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0630 15:35:07.346127 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0630 15:35:07.424894 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0630 15:35:07.424968 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0630 15:35:07.479597 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0630 15:35:07.479832 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0630 15:35:07.507312 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0630 15:35:07.507335 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0630 15:35:07.507443 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0630 15:35:07.610743 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0630 15:35:07.610762 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0630 15:35:07.615375 1597492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0630 15:35:07.615482 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0630 15:35:07.654628 1597492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0630 15:35:07.654740 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0630 15:35:07.695647 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0630 15:35:07.695685 1597492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0630 15:35:07.695757 1597492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0630 15:35:07.695874 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0630 15:35:07.708091 1597492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0630 15:35:07.708239 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0630 15:35:07.727835 1597492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0630 15:35:07.727879 1597492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0630 15:35:07.727900 1597492 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0630 15:35:07.727922 1597492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0630 15:35:07.727954 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0630 15:35:07.727961 1597492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0630 15:35:07.799693 1597492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0630 15:35:07.799775 1597492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0630 15:35:07.799821 1597492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0630 15:35:07.799857 1597492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0630 15:35:07.799860 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0630 15:35:07.799869 1597492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0630 15:35:08.160647 1597492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:35:10.328780 1597492 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.600787294s)
	I0630 15:35:10.328840 1597492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0630 15:35:10.328871 1597492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.60088307s)
	I0630 15:35:10.328897 1597492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0630 15:35:10.328914 1597492 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.528913151s)
	I0630 15:35:10.328927 1597492 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0630 15:35:10.328933 1597492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0630 15:35:10.328984 1597492 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.529074473s)
	I0630 15:35:10.328998 1597492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0630 15:35:10.329009 1597492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0630 15:35:10.329027 1597492 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.168340586s)
	I0630 15:35:11.186112 1597492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0630 15:35:11.186185 1597492 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0630 15:35:11.186289 1597492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0630 15:35:11.328692 1597492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0630 15:35:11.328752 1597492 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0630 15:35:11.328820 1597492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0630 15:35:11.776009 1597492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0630 15:35:11.776113 1597492 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0630 15:35:11.776207 1597492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0630 15:35:14.233647 1597492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.457406797s)
	I0630 15:35:14.233697 1597492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0630 15:35:14.233747 1597492 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0630 15:35:14.233810 1597492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0630 15:35:14.984672 1597492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0630 15:35:14.984724 1597492 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0630 15:35:14.984788 1597492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0630 15:35:15.739395 1597492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0630 15:35:15.739448 1597492 cache_images.go:123] Successfully loaded all cached images
	I0630 15:35:15.739453 1597492 cache_images.go:92] duration metric: took 8.904490564s to LoadCachedImages
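
The block above is minikube's cached-image path: "podman image inspect --format {{.Id}}" compares the on-VM image ID against the expected hash, a mismatch is logged as "needs transfer", "crictl rmi" drops the stale tag, "stat" decides whether the cached tarball is already on the VM, and "podman load" imports it into the CRI-O store. A minimal, runnable Go sketch of that decision loop follows; the run helper executes commands locally here, whereas minikube runs them over SSH via ssh_runner, so treat it as an illustration rather than minikube's real API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns trimmed stdout. minikube performs the
// same commands over SSH; this sketch runs them locally.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out)), err
}

// loadCachedImage mirrors the inspect -> rmi -> stat -> load sequence in the
// log: compare the runtime's image ID with the expected hash, remove a stale
// tag, then import the cached tarball with podman load.
func loadCachedImage(img, wantID, tarball string) error {
	if id, err := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img); err == nil && id == wantID {
		return nil // already present at the right hash, nothing to transfer
	}
	// "needs transfer": drop the stale tag so the load installs the cached copy.
	if _, err := run("sudo", "/usr/bin/crictl", "rmi", img); err != nil {
		fmt.Println("rmi (image may not exist yet):", err)
	}
	// The "copy: skipping ... (exists)" lines correspond to this stat check.
	if _, err := run("stat", "-c", "%s %y", tarball); err != nil {
		return fmt.Errorf("cached tarball missing: %w", err)
	}
	_, err := run("sudo", "podman", "load", "-i", tarball)
	return err
}

func main() {
	_ = loadCachedImage("registry.k8s.io/pause:3.7",
		"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165",
		"/var/lib/minikube/images/pause_3.7")
}
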
	I0630 15:35:15.739479 1597492 kubeadm.go:926] updating node { 192.168.39.161 8443 v1.24.4 crio true true} ...
	I0630 15:35:15.739614 1597492 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-833225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-833225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
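
The doubled ExecStart= in the kubelet drop-in above is the standard systemd idiom: an empty ExecStart= first clears the base unit's command so the following line fully replaces it. The drop-in is rendered from the node config and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 379-byte scp). A sketch of rendering it with text/template, using a hypothetical trimmed-down config struct rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// kubeletConfig is a hypothetical subset of the node config used to render
// the drop-in; minikube's real template takes more fields.
type kubeletConfig struct {
	Version, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, kubeletConfig{"v1.24.4", "test-preload-833225", "192.168.39.161"})
}
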
	I0630 15:35:15.739690 1597492 ssh_runner.go:195] Run: crio config
	I0630 15:35:15.783782 1597492 cni.go:84] Creating CNI manager for ""
	I0630 15:35:15.783807 1597492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:35:15.783817 1597492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:35:15.783835 1597492 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-833225 NodeName:test-preload-833225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:35:15.783980 1597492 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-833225"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 15:35:15.784046 1597492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0630 15:35:15.795474 1597492 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:35:15.795575 1597492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:35:15.806624 1597492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0630 15:35:15.825544 1597492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:35:15.844356 1597492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0630 15:35:15.864440 1597492 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I0630 15:35:15.868550 1597492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
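
The bash one-liner above makes the /etc/hosts entry idempotent: it filters out any existing line ending in a tab plus control-plane.minikube.internal, appends the current IP, and copies the temp file back into place with sudo. The same logic in Go, written against a scratch copy of the file so the sketch is safe to try:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the grep -v / echo / cp one-liner from the log:
// drop any prior line for host, append ip<TAB>host, and rewrite the file.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) { // same test as grep -v $'\t'host'$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Operate on a scratch copy rather than the real /etc/hosts.
	if err := ensureHostsEntry("hosts.copy", "192.168.39.161", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
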
	I0630 15:35:15.881664 1597492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:35:16.014912 1597492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:35:16.046197 1597492 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225 for IP: 192.168.39.161
	I0630 15:35:16.046254 1597492 certs.go:194] generating shared ca certs ...
	I0630 15:35:16.046278 1597492 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:35:16.046491 1597492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:35:16.046551 1597492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:35:16.046564 1597492 certs.go:256] generating profile certs ...
	I0630 15:35:16.046676 1597492 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/client.key
	I0630 15:35:16.046754 1597492 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/apiserver.key.2ba7d86c
	I0630 15:35:16.046834 1597492 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/proxy-client.key
	I0630 15:35:16.047002 1597492 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:35:16.047046 1597492 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:35:16.047060 1597492 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:35:16.047111 1597492 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:35:16.047145 1597492 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:35:16.047182 1597492 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:35:16.047244 1597492 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:35:16.048233 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:35:16.088989 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:35:16.128263 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:35:16.160826 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:35:16.191811 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0630 15:35:16.221614 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 15:35:16.251021 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:35:16.281047 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:35:16.310652 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:35:16.341372 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:35:16.371082 1597492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:35:16.399992 1597492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:35:16.421555 1597492 ssh_runner.go:195] Run: openssl version
	I0630 15:35:16.428298 1597492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:35:16.440628 1597492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:35:16.445633 1597492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:35:16.445705 1597492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:35:16.453320 1597492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:35:16.466358 1597492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:35:16.479718 1597492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:35:16.485491 1597492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:35:16.485577 1597492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:35:16.493004 1597492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:35:16.505686 1597492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:35:16.518453 1597492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:35:16.523566 1597492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:35:16.523651 1597492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:35:16.530956 1597492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:35:16.544015 1597492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:35:16.549376 1597492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0630 15:35:16.557266 1597492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0630 15:35:16.564743 1597492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0630 15:35:16.572599 1597492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0630 15:35:16.579967 1597492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0630 15:35:16.587758 1597492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
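
Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is what decides whether minikube regenerates it on restart. The equivalent check with Go's crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same test as `openssl x509 -checkend <seconds>` in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
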
	I0630 15:35:16.595683 1597492 kubeadm.go:392] StartCluster: {Name:test-preload-833225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-833225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:35:16.595810 1597492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:35:16.595893 1597492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:35:16.633774 1597492 cri.go:89] found id: ""
	I0630 15:35:16.633870 1597492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:35:16.645535 1597492 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0630 15:35:16.645653 1597492 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0630 15:35:16.645878 1597492 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0630 15:35:16.657667 1597492 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:35:16.658283 1597492 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-833225" does not appear in /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:35:16.658581 1597492 kubeconfig.go:62] /home/jenkins/minikube-integration/20991-1550299/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-833225" cluster setting kubeconfig missing "test-preload-833225" context setting]
	I0630 15:35:16.659161 1597492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:35:16.659873 1597492 kapi.go:59] client config for test-preload-833225: &rest.Config{Host:"https://192.168.39.161:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0630 15:35:16.660626 1597492 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0630 15:35:16.660668 1597492 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0630 15:35:16.660675 1597492 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0630 15:35:16.660680 1597492 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0630 15:35:16.660685 1597492 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0630 15:35:16.661116 1597492 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0630 15:35:16.673041 1597492 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.161
	I0630 15:35:16.673088 1597492 kubeadm.go:1152] stopping kube-system containers ...
	I0630 15:35:16.673106 1597492 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0630 15:35:16.673190 1597492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:35:16.710442 1597492 cri.go:89] found id: ""
	I0630 15:35:16.710567 1597492 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0630 15:35:16.728534 1597492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:35:16.740830 1597492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:35:16.740859 1597492 kubeadm.go:157] found existing configuration files:
	
	I0630 15:35:16.740927 1597492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:35:16.751723 1597492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:35:16.751807 1597492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:35:16.763373 1597492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:35:16.774779 1597492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:35:16.774875 1597492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:35:16.785811 1597492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:35:16.796061 1597492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:35:16.796173 1597492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:35:16.807721 1597492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:35:16.818413 1597492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:35:16.818502 1597492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:35:16.829769 1597492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:35:16.841177 1597492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:35:16.900542 1597492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:35:17.721884 1597492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:35:18.017334 1597492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:35:18.084027 1597492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
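
On restart, minikube re-runs individual kubeadm init phases in a fixed order (certs, kubeconfigs, kubelet-start, control-plane static pods, local etcd) instead of a full kubeadm init, so existing cluster state is preserved. A sketch that replays the same sequence; the phase names and --config path are taken from the log lines above, while the PATH/env plumbing minikube adds is omitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same order as the restartPrimaryControlPlane steps in the log.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm init phase %s: err=%v\n%s", p, err, out)
	}
}
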
	I0630 15:35:18.158805 1597492 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:35:18.158939 1597492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:35:18.659352 1597492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:35:19.159218 1597492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:35:19.184880 1597492 api_server.go:72] duration metric: took 1.026076043s to wait for apiserver process to appear ...
	I0630 15:35:19.184921 1597492 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:35:19.184950 1597492 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0630 15:35:19.185609 1597492 api_server.go:269] stopped: https://192.168.39.161:8443/healthz: Get "https://192.168.39.161:8443/healthz": dial tcp 192.168.39.161:8443: connect: connection refused
	I0630 15:35:19.685314 1597492 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0630 15:35:23.456450 1597492 api_server.go:279] https://192.168.39.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:35:23.456492 1597492 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:35:23.456515 1597492 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0630 15:35:23.481347 1597492 api_server.go:279] https://192.168.39.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:35:23.481451 1597492 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:35:23.685944 1597492 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0630 15:35:23.692621 1597492 api_server.go:279] https://192.168.39.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:35:23.692669 1597492 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:35:24.185332 1597492 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0630 15:35:24.193615 1597492 api_server.go:279] https://192.168.39.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:35:24.193653 1597492 api_server.go:103] status: https://192.168.39.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:35:24.685347 1597492 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0630 15:35:24.691281 1597492 api_server.go:279] https://192.168.39.161:8443/healthz returned 200:
	ok
	I0630 15:35:24.698535 1597492 api_server.go:141] control plane version: v1.24.4
	I0630 15:35:24.698570 1597492 api_server.go:131] duration metric: took 5.513641134s to wait for apiserver health ...
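
The healthz progression above is the normal restart pattern: connection refused while the apiserver boots, 403 for the anonymous probe until RBAC bootstrap roles exist, 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, then 200. A minimal poller with the same shape; TLS verification is skipped here for brevity, whereas minikube authenticates with the profile's client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200 or the deadline passes,
// mirroring the retry loop in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "ok" response in the log
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("healthz did not become ready within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.161:8443/healthz", time.Minute))
}
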
	I0630 15:35:24.698580 1597492 cni.go:84] Creating CNI manager for ""
	I0630 15:35:24.698587 1597492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:35:24.700966 1597492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 15:35:24.702674 1597492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:35:24.719879 1597492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
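
The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced by the "Configuring bridge CNI" line. Its exact contents are not shown in the log, so the following is an illustrative bridge-plus-portmap conflist for the 10.244.0.0/16 pod CIDR, not the verbatim payload minikube ships:

package main

import "os"

// conflist is a plausible stand-in for minikube's 1-k8s.conflist; treat the
// field values as assumptions, only the pod CIDR comes from the log above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// minikube drops this under /etc/cni/net.d via scp; write locally here.
	_ = os.WriteFile("1-k8s.conflist", []byte(conflist), 0644)
}
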
	I0630 15:35:24.748345 1597492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:35:24.753018 1597492 system_pods.go:59] 7 kube-system pods found
	I0630 15:35:24.753068 1597492 system_pods.go:61] "coredns-6d4b75cb6d-w868v" [f6fdd454-a457-45e6-913e-efbcd6442606] Running
	I0630 15:35:24.753461 1597492 system_pods.go:61] "etcd-test-preload-833225" [c8bf634f-972c-45a2-b898-f568fe82562b] Running
	I0630 15:35:24.753482 1597492 system_pods.go:61] "kube-apiserver-test-preload-833225" [c3153c88-0693-4a9a-9d27-05689bd4c8d6] Running
	I0630 15:35:24.753491 1597492 system_pods.go:61] "kube-controller-manager-test-preload-833225" [127c942d-a3c4-477a-94b1-f9f8a65586d0] Running
	I0630 15:35:24.753507 1597492 system_pods.go:61] "kube-proxy-jqwrl" [63f3d7c6-caa0-4767-8ba7-58fa55ad3603] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0630 15:35:24.753521 1597492 system_pods.go:61] "kube-scheduler-test-preload-833225" [ecad2902-f0da-43d4-ac30-c68fe6b7e2b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:35:24.753548 1597492 system_pods.go:61] "storage-provisioner" [5c402b93-cc4a-49a7-af70-f77df9c53852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:35:24.753558 1597492 system_pods.go:74] duration metric: took 5.168436ms to wait for pod list to return data ...
	I0630 15:35:24.753569 1597492 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:35:24.756035 1597492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:35:24.756066 1597492 node_conditions.go:123] node cpu capacity is 2
	I0630 15:35:24.756080 1597492 node_conditions.go:105] duration metric: took 2.502183ms to run NodePressure ...
	I0630 15:35:24.756128 1597492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:35:25.061450 1597492 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0630 15:35:25.067386 1597492 kubeadm.go:735] kubelet initialised
	I0630 15:35:25.067421 1597492 kubeadm.go:736] duration metric: took 5.931902ms waiting for restarted kubelet to initialise ...
	I0630 15:35:25.067445 1597492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:35:25.086541 1597492 ops.go:34] apiserver oom_adj: -16
	I0630 15:35:25.086576 1597492 kubeadm.go:593] duration metric: took 8.440913721s to restartPrimaryControlPlane
	I0630 15:35:25.086604 1597492 kubeadm.go:394] duration metric: took 8.490918806s to StartCluster
	I0630 15:35:25.086632 1597492 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:35:25.086730 1597492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:35:25.087762 1597492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:35:25.088093 1597492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:35:25.088150 1597492 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:35:25.088256 1597492 addons.go:69] Setting storage-provisioner=true in profile "test-preload-833225"
	I0630 15:35:25.088292 1597492 addons.go:69] Setting default-storageclass=true in profile "test-preload-833225"
	I0630 15:35:25.088341 1597492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-833225"
	I0630 15:35:25.088342 1597492 config.go:182] Loaded profile config "test-preload-833225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0630 15:35:25.088298 1597492 addons.go:238] Setting addon storage-provisioner=true in "test-preload-833225"
	W0630 15:35:25.088409 1597492 addons.go:247] addon storage-provisioner should already be in state true
	I0630 15:35:25.088439 1597492 host.go:66] Checking if "test-preload-833225" exists ...
	I0630 15:35:25.088818 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:35:25.088828 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:35:25.088871 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:35:25.088975 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:35:25.091013 1597492 out.go:177] * Verifying Kubernetes components...
	I0630 15:35:25.092823 1597492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:35:25.106728 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0630 15:35:25.106803 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I0630 15:35:25.107382 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:35:25.107505 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:35:25.108028 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:35:25.108063 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:35:25.108154 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:35:25.108181 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:35:25.108452 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:35:25.108588 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:35:25.108800 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetState
	I0630 15:35:25.109319 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:35:25.109375 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:35:25.111674 1597492 kapi.go:59] client config for test-preload-833225: &rest.Config{Host:"https://192.168.39.161:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0630 15:35:25.112087 1597492 addons.go:238] Setting addon default-storageclass=true in "test-preload-833225"
	W0630 15:35:25.112114 1597492 addons.go:247] addon default-storageclass should already be in state true
	I0630 15:35:25.112159 1597492 host.go:66] Checking if "test-preload-833225" exists ...
	I0630 15:35:25.112552 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:35:25.112796 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:35:25.128044 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I0630 15:35:25.128608 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:35:25.129243 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:35:25.129272 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:35:25.129533 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42383
	I0630 15:35:25.129750 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:35:25.129962 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetState
	I0630 15:35:25.130187 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:35:25.130730 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:35:25.130759 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:35:25.131154 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:35:25.131710 1597492 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:35:25.131755 1597492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:35:25.131937 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:35:25.134625 1597492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:35:25.136456 1597492 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:35:25.136493 1597492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:35:25.136524 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:25.140759 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:25.141361 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:25.141388 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:25.141674 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:25.141909 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:25.142135 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:25.142298 1597492 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa Username:docker}
	I0630 15:35:25.165916 1597492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I0630 15:35:25.166559 1597492 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:35:25.167126 1597492 main.go:141] libmachine: Using API Version  1
	I0630 15:35:25.167146 1597492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:35:25.167516 1597492 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:35:25.167767 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetState
	I0630 15:35:25.169933 1597492 main.go:141] libmachine: (test-preload-833225) Calling .DriverName
	I0630 15:35:25.170228 1597492 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:35:25.170255 1597492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:35:25.170273 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHHostname
	I0630 15:35:25.173358 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:25.173892 1597492 main.go:141] libmachine: (test-preload-833225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:e7", ip: ""} in network mk-test-preload-833225: {Iface:virbr1 ExpiryTime:2025-06-30 16:34:51 +0000 UTC Type:0 Mac:52:54:00:cc:3a:e7 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:test-preload-833225 Clientid:01:52:54:00:cc:3a:e7}
	I0630 15:35:25.173922 1597492 main.go:141] libmachine: (test-preload-833225) DBG | domain test-preload-833225 has defined IP address 192.168.39.161 and MAC address 52:54:00:cc:3a:e7 in network mk-test-preload-833225
	I0630 15:35:25.174143 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHPort
	I0630 15:35:25.174362 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHKeyPath
	I0630 15:35:25.174534 1597492 main.go:141] libmachine: (test-preload-833225) Calling .GetSSHUsername
	I0630 15:35:25.174690 1597492 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/test-preload-833225/id_rsa Username:docker}
	I0630 15:35:25.382637 1597492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:35:25.424033 1597492 node_ready.go:35] waiting up to 6m0s for node "test-preload-833225" to be "Ready" ...
	I0630 15:35:25.596731 1597492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:35:25.613009 1597492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:35:26.609117 1597492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.012320504s)
	I0630 15:35:26.609189 1597492 main.go:141] libmachine: Making call to close driver server
	I0630 15:35:26.609206 1597492 main.go:141] libmachine: (test-preload-833225) Calling .Close
	I0630 15:35:26.609245 1597492 main.go:141] libmachine: Making call to close driver server
	I0630 15:35:26.609280 1597492 main.go:141] libmachine: (test-preload-833225) Calling .Close
	I0630 15:35:26.609623 1597492 main.go:141] libmachine: (test-preload-833225) DBG | Closing plugin on server side
	I0630 15:35:26.609673 1597492 main.go:141] libmachine: (test-preload-833225) DBG | Closing plugin on server side
	I0630 15:35:26.609719 1597492 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:35:26.609728 1597492 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:35:26.609738 1597492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:35:26.609746 1597492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:35:26.609758 1597492 main.go:141] libmachine: Making call to close driver server
	I0630 15:35:26.609762 1597492 main.go:141] libmachine: Making call to close driver server
	I0630 15:35:26.609767 1597492 main.go:141] libmachine: (test-preload-833225) Calling .Close
	I0630 15:35:26.609772 1597492 main.go:141] libmachine: (test-preload-833225) Calling .Close
	I0630 15:35:26.610010 1597492 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:35:26.610015 1597492 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:35:26.610041 1597492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:35:26.610047 1597492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:35:26.610040 1597492 main.go:141] libmachine: (test-preload-833225) DBG | Closing plugin on server side
	I0630 15:35:26.615605 1597492 main.go:141] libmachine: Making call to close driver server
	I0630 15:35:26.615625 1597492 main.go:141] libmachine: (test-preload-833225) Calling .Close
	I0630 15:35:26.615995 1597492 main.go:141] libmachine: (test-preload-833225) DBG | Closing plugin on server side
	I0630 15:35:26.616020 1597492 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:35:26.616032 1597492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:35:26.618001 1597492 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0630 15:35:26.619219 1597492 addons.go:514] duration metric: took 1.53107996s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0630 15:35:27.427908 1597492 node_ready.go:57] node "test-preload-833225" has "Ready":"False" status (will retry)
	W0630 15:35:29.427968 1597492 node_ready.go:57] node "test-preload-833225" has "Ready":"False" status (will retry)
	W0630 15:35:31.428935 1597492 node_ready.go:57] node "test-preload-833225" has "Ready":"False" status (will retry)
	W0630 15:35:33.927359 1597492 node_ready.go:57] node "test-preload-833225" has "Ready":"False" status (will retry)
	I0630 15:35:34.428290 1597492 node_ready.go:49] node "test-preload-833225" is "Ready"
	I0630 15:35:34.428322 1597492 node_ready.go:38] duration metric: took 9.004246059s for node "test-preload-833225" to be "Ready" ...
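
	The node_ready wait above simply polls the node object until its Ready condition reports True. A minimal client-go sketch of the same check (the helper name, kubeconfig path, and 2s retry cadence are illustrative assumptions, not minikube's actual code):

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls until the node's NodeReady condition is True or ctx expires.
	    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	    	tick := time.NewTicker(2 * time.Second) // illustrative cadence, not minikube's
	    	defer tick.Stop()
	    	for {
	    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	    		if err == nil {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					return nil
	    				}
	    			}
	    		}
	    		select {
	    		case <-ctx.Done():
	    			return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
	    		case <-tick.C:
	    		}
	    	}
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // same budget as the log
	    	defer cancel()
	    	if err := waitNodeReady(ctx, cs, "test-preload-833225"); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(`node "test-preload-833225" is "Ready"`)
	    }
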
	I0630 15:35:34.428336 1597492 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:35:34.428398 1597492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:35:34.447289 1597492 api_server.go:72] duration metric: took 9.359148451s to wait for apiserver process to appear ...
	I0630 15:35:34.447317 1597492 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:35:34.447350 1597492 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0630 15:35:34.452563 1597492 api_server.go:279] https://192.168.39.161:8443/healthz returned 200:
	ok
	I0630 15:35:34.453617 1597492 api_server.go:141] control plane version: v1.24.4
	I0630 15:35:34.453642 1597492 api_server.go:131] duration metric: took 6.318941ms to wait for apiserver health ...
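
	The healthz probe above is just an HTTPS GET against the apiserver that expects a 200 status with an "ok" body. A stripped-down sketch; certificate verification is skipped here purely for illustration, whereas the real check authenticates with the cluster CA and client certs from the rest.Config logged earlier:

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			// Illustration only: the genuine probe verifies against the minikube CA.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	resp, err := client.Get("https://192.168.39.161:8443/healthz")
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
	    }
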
	I0630 15:35:34.453651 1597492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:35:34.456886 1597492 system_pods.go:59] 7 kube-system pods found
	I0630 15:35:34.456927 1597492 system_pods.go:61] "coredns-6d4b75cb6d-w868v" [f6fdd454-a457-45e6-913e-efbcd6442606] Running
	I0630 15:35:34.456932 1597492 system_pods.go:61] "etcd-test-preload-833225" [c8bf634f-972c-45a2-b898-f568fe82562b] Running
	I0630 15:35:34.456935 1597492 system_pods.go:61] "kube-apiserver-test-preload-833225" [c3153c88-0693-4a9a-9d27-05689bd4c8d6] Running
	I0630 15:35:34.456939 1597492 system_pods.go:61] "kube-controller-manager-test-preload-833225" [127c942d-a3c4-477a-94b1-f9f8a65586d0] Running
	I0630 15:35:34.456942 1597492 system_pods.go:61] "kube-proxy-jqwrl" [63f3d7c6-caa0-4767-8ba7-58fa55ad3603] Running
	I0630 15:35:34.456945 1597492 system_pods.go:61] "kube-scheduler-test-preload-833225" [ecad2902-f0da-43d4-ac30-c68fe6b7e2b9] Running
	I0630 15:35:34.456949 1597492 system_pods.go:61] "storage-provisioner" [5c402b93-cc4a-49a7-af70-f77df9c53852] Running
	I0630 15:35:34.456954 1597492 system_pods.go:74] duration metric: took 3.298277ms to wait for pod list to return data ...
	I0630 15:35:34.456962 1597492 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:35:34.459569 1597492 default_sa.go:45] found service account: "default"
	I0630 15:35:34.459595 1597492 default_sa.go:55] duration metric: took 2.627029ms for default service account to be created ...
	I0630 15:35:34.459606 1597492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:35:34.462628 1597492 system_pods.go:86] 7 kube-system pods found
	I0630 15:35:34.462662 1597492 system_pods.go:89] "coredns-6d4b75cb6d-w868v" [f6fdd454-a457-45e6-913e-efbcd6442606] Running
	I0630 15:35:34.462668 1597492 system_pods.go:89] "etcd-test-preload-833225" [c8bf634f-972c-45a2-b898-f568fe82562b] Running
	I0630 15:35:34.462671 1597492 system_pods.go:89] "kube-apiserver-test-preload-833225" [c3153c88-0693-4a9a-9d27-05689bd4c8d6] Running
	I0630 15:35:34.462679 1597492 system_pods.go:89] "kube-controller-manager-test-preload-833225" [127c942d-a3c4-477a-94b1-f9f8a65586d0] Running
	I0630 15:35:34.462683 1597492 system_pods.go:89] "kube-proxy-jqwrl" [63f3d7c6-caa0-4767-8ba7-58fa55ad3603] Running
	I0630 15:35:34.462686 1597492 system_pods.go:89] "kube-scheduler-test-preload-833225" [ecad2902-f0da-43d4-ac30-c68fe6b7e2b9] Running
	I0630 15:35:34.462689 1597492 system_pods.go:89] "storage-provisioner" [5c402b93-cc4a-49a7-af70-f77df9c53852] Running
	I0630 15:35:34.462696 1597492 system_pods.go:126] duration metric: took 3.084752ms to wait for k8s-apps to be running ...
	I0630 15:35:34.462706 1597492 system_svc.go:44] waiting for kubelet service to be running ...
	I0630 15:35:34.462752 1597492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:35:34.479556 1597492 system_svc.go:56] duration metric: took 16.836969ms WaitForService to wait for kubelet
	I0630 15:35:34.479602 1597492 kubeadm.go:578] duration metric: took 9.391467633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:35:34.479627 1597492 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:35:34.482120 1597492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:35:34.482150 1597492 node_conditions.go:123] node cpu capacity is 2
	I0630 15:35:34.482163 1597492 node_conditions.go:105] duration metric: took 2.530537ms to run NodePressure ...
	I0630 15:35:34.482177 1597492 start.go:241] waiting for startup goroutines ...
	I0630 15:35:34.482189 1597492 start.go:246] waiting for cluster config update ...
	I0630 15:35:34.482200 1597492 start.go:255] writing updated cluster config ...
	I0630 15:35:34.482475 1597492 ssh_runner.go:195] Run: rm -f paused
	I0630 15:35:34.487204 1597492 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:35:34.487913 1597492 kapi.go:59] client config for test-preload-833225: &rest.Config{Host:"https://192.168.39.161:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/test-preload-833225/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0630 15:35:34.491334 1597492 pod_ready.go:83] waiting for pod "coredns-6d4b75cb6d-w868v" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:34.495471 1597492 pod_ready.go:94] pod "coredns-6d4b75cb6d-w868v" is "Ready"
	I0630 15:35:34.495502 1597492 pod_ready.go:86] duration metric: took 4.144268ms for pod "coredns-6d4b75cb6d-w868v" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:34.497872 1597492 pod_ready.go:83] waiting for pod "etcd-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:34.501561 1597492 pod_ready.go:94] pod "etcd-test-preload-833225" is "Ready"
	I0630 15:35:34.501588 1597492 pod_ready.go:86] duration metric: took 3.687183ms for pod "etcd-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:34.503777 1597492 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:34.507483 1597492 pod_ready.go:94] pod "kube-apiserver-test-preload-833225" is "Ready"
	I0630 15:35:34.507598 1597492 pod_ready.go:86] duration metric: took 3.781512ms for pod "kube-apiserver-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:34.510049 1597492 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:34.891697 1597492 pod_ready.go:94] pod "kube-controller-manager-test-preload-833225" is "Ready"
	I0630 15:35:34.891735 1597492 pod_ready.go:86] duration metric: took 381.662496ms for pod "kube-controller-manager-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:35.092553 1597492 pod_ready.go:83] waiting for pod "kube-proxy-jqwrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:35.492707 1597492 pod_ready.go:94] pod "kube-proxy-jqwrl" is "Ready"
	I0630 15:35:35.492751 1597492 pod_ready.go:86] duration metric: took 400.158956ms for pod "kube-proxy-jqwrl" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:35.692998 1597492 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:36.092732 1597492 pod_ready.go:94] pod "kube-scheduler-test-preload-833225" is "Ready"
	I0630 15:35:36.092770 1597492 pod_ready.go:86] duration metric: took 399.73716ms for pod "kube-scheduler-test-preload-833225" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:35:36.092783 1597492 pod_ready.go:40] duration metric: took 1.605514649s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
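
	Each pod_ready step above lists kube-system pods matching one label selector and requires every match to carry a True PodReady condition. A compact sketch under the same client-go setup as the earlier example; the selector and helper name are illustrative:

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // allReady reports whether every kube-system pod matching selector is Ready.
	    func allReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
	    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, p := range pods.Items {
	    		ready := false
	    		for _, c := range p.Status.Conditions {
	    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	    				ready = true
	    				break
	    			}
	    		}
	    		if !ready {
	    			return false, nil
	    		}
	    	}
	    	return true, nil
	    }

	    func main() {
	    	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	cs, _ := kubernetes.NewForConfig(cfg)
	    	ok, err := allReady(context.Background(), cs, "k8s-app=kube-dns") // one of the logged selectors
	    	fmt.Println(ok, err)
	    }
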
	I0630 15:35:36.142758 1597492 start.go:607] kubectl: 1.33.2, cluster: 1.24.4 (minor skew: 9)
	I0630 15:35:36.144680 1597492 out.go:201] 
	W0630 15:35:36.146298 1597492 out.go:270] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0630 15:35:36.147665 1597492 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0630 15:35:36.148933 1597492 out.go:177] * Done! kubectl is now configured to use "test-preload-833225" cluster and "default" namespace by default
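
	The "(minor skew: 9)" figure above is just the distance between kubectl's and the cluster's minor versions; kubectl officially supports only one minor version of skew in either direction, hence the warning. A toy sketch of that computation, with deliberately naive parsing and no handling of malformed versions:

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    )

	    // minor extracts the minor component of a "major.minor.patch" version string.
	    func minor(v string) int {
	    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	    	m, _ := strconv.Atoi(parts[1])
	    	return m
	    }

	    func main() {
	    	kubectl, cluster := "1.33.2", "1.24.4"
	    	skew := minor(kubectl) - minor(cluster)
	    	if skew < 0 {
	    		skew = -skew
	    	}
	    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew) // prints 9
	    }
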
	
	
	==> CRI-O <==
	Jun 30 15:35:37 test-preload-833225 crio[871]: time="2025-06-30 15:35:37.113723291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751297737113702402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55c97cef-c3f1-4fcb-8233-585625280bfe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:35:37 test-preload-833225 crio[871]: time="2025-06-30 15:35:37.114381322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d3ea10a-659c-4a96-b8e3-6938a083ae30 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:35:37 test-preload-833225 crio[871]: time="2025-06-30 15:35:37.114441771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d3ea10a-659c-4a96-b8e3-6938a083ae30 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:35:37 test-preload-833225 crio[871]: time="2025-06-30 15:35:37.114648695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85c976b5848e2c66ef389753a8eee46c94611f702f75252d333d720c42955f2e,PodSandboxId:cdac6a88b407a58923b376a33a851981e732e27b7f4a15e571a5363e3d53454d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1751297732327687303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w868v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6fdd454-a457-45e6-913e-efbcd6442606,},Annotations:map[string]string{io.kubernetes.container.hash: 9da201b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d63654042f0e49488add819a12be55e8c3aa77986ae9d86de115441e1cf2dd5,PodSandboxId:87819a100652df3d52fe7c19e0a8696c2dbfc09d7e46bc610855af939b74623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751297726273502663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5c402b93-cc4a-49a7-af70-f77df9c53852,},Annotations:map[string]string{io.kubernetes.container.hash: f6a8ed24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953a7e238b3ea296c436c937ca745c41962793a19ed5e01d9ae6166e04ef448e,PodSandboxId:87819a100652df3d52fe7c19e0a8696c2dbfc09d7e46bc610855af939b74623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751297724984733318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 5c402b93-cc4a-49a7-af70-f77df9c53852,},Annotations:map[string]string{io.kubernetes.container.hash: f6a8ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffccb514aec89583db287fba13d5fd524b817417ff0238367677d0f92d59332,PodSandboxId:62b366ac04c5869a963268cde5c391fe24e9dbc2df00dd96a935fa67b220a158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1751297724918341999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqwrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f3d7c6-caa0-4
767-8ba7-58fa55ad3603,},Annotations:map[string]string{io.kubernetes.container.hash: 925c7052,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e107986ae49ce82e7f63bbb4e4d46d5cc698a515cece386de3a9a6f8315ae73,PodSandboxId:6d5f90970d9761994463cfabc02563ac56b3a2644a50e7d32ee2cb45077e5baa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1751297718893521404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-833225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea844d4dde749394d21ebc65ae4a529,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1c873b9f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8235a9aa95a27de3badde42f9c98b17daef9f69c82a85139d91f7538ec888e4d,PodSandboxId:579268a31935b4c85698cd37059620bf435bbe7eb3e929ebaf73eb47515307fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1751297718861766407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-833225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 715b74d888408726be8abc5fabf4cde8,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1853a9b466232931af122e548ee265f5bed4c7e99fc7b8efb6c569e5d17d82,PodSandboxId:c10f7b3233a999f510df5773753ff8d7a5ff12d1ada1cfa5f3949e8f72d30c1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1751297718795875455,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-833225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b987b49975533abf74f1ef90e1b3f1e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151f928bfe266b82183ce1578a41827aaf6f6180f2c4fb212028cfe116524486,PodSandboxId:ddd6d3a45d643a9515bcefed966ed134f6fb5de5ddf60542df8d59618ad9bce0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1751297718792426029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-833225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa5c1cf3d3269cc1497ea76139b0339,},Annotations:map[string]
string{io.kubernetes.container.hash: cef15eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d3ea10a-659c-4a96-b8e3-6938a083ae30 name=/runtime.v1.RuntimeService/ListContainers
	[... the Version / ImageFsInfo / ListContainers debug cycle above repeats three more times with identical payloads (only request ids and timestamps differ); omitted ...]
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85c976b5848e2       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   cdac6a88b407a       coredns-6d4b75cb6d-w868v
	3d63654042f0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       2                   87819a100652d       storage-provisioner
	953a7e238b3ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       1                   87819a100652d       storage-provisioner
	7ffccb514aec8       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   62b366ac04c58       kube-proxy-jqwrl
	0e107986ae49c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   6d5f90970d976       etcd-test-preload-833225
	8235a9aa95a27       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   579268a31935b       kube-controller-manager-test-preload-833225
	9b1853a9b4662       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   c10f7b3233a99       kube-scheduler-test-preload-833225
	151f928bfe266       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   ddd6d3a45d643       kube-apiserver-test-preload-833225
	
	
	==> coredns [85c976b5848e2c66ef389753a8eee46c94611f702f75252d333d720c42955f2e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:49775 - 61260 "HINFO IN 7165453595012744509.6334433393900046559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029333673s
	
	
	==> describe nodes <==
	Name:               test-preload-833225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-833225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=test-preload-833225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T15_33_56_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 15:33:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-833225
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 15:35:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 15:35:33 +0000   Mon, 30 Jun 2025 15:33:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 15:35:33 +0000   Mon, 30 Jun 2025 15:33:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 15:35:33 +0000   Mon, 30 Jun 2025 15:33:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 15:35:33 +0000   Mon, 30 Jun 2025 15:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    test-preload-833225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	System Info:
	  Machine ID:                 458de3235b4a44d1901fc7a20e3dd229
	  System UUID:                458de323-5b4a-44d1-901f-c7a20e3dd229
	  Boot ID:                    0a06439a-d974-4a44-bef9-c487162047ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-w868v                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     88s
	  kube-system                 etcd-test-preload-833225                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         101s
	  kube-system                 kube-apiserver-test-preload-833225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-test-preload-833225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-jqwrl                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-test-preload-833225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  Starting                 87s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  109s (x4 over 109s)  kubelet          Node test-preload-833225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x4 over 109s)  kubelet          Node test-preload-833225 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x3 over 109s)  kubelet          Node test-preload-833225 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s                 kubelet          Node test-preload-833225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s                 kubelet          Node test-preload-833225 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s                 kubelet          Node test-preload-833225 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                91s                  kubelet          Node test-preload-833225 status is now: NodeReady
	  Normal  RegisteredNode           88s                  node-controller  Node test-preload-833225 event: Registered Node test-preload-833225 in Controller
	  Normal  Starting                 19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node test-preload-833225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node test-preload-833225 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node test-preload-833225 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                   node-controller  Node test-preload-833225 event: Registered Node test-preload-833225 in Controller
	
	
	==> dmesg <==
	[Jun30 15:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004195] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.059256] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun30 15:35] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.105355] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.267800] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.000035] kauditd_printk_skb: 34 callbacks suppressed
	
	
	==> etcd [0e107986ae49ce82e7f63bbb4e4d46d5cc698a515cece386de3a9a6f8315ae73] <==
	{"level":"info","ts":"2025-06-30T15:35:19.191Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"59d4e9d626571860","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-06-30T15:35:19.192Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-06-30T15:35:19.193Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-06-30T15:35:19.193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 switched to configuration voters=(6473055670413760608)"}
	{"level":"info","ts":"2025-06-30T15:35:19.193Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","added-peer-id":"59d4e9d626571860","added-peer-peer-urls":["https://192.168.39.161:2380"]}
	{"level":"info","ts":"2025-06-30T15:35:19.193Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:35:19.194Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:35:19.193Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2025-06-30T15:35:19.196Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2025-06-30T15:35:19.196Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"59d4e9d626571860","initial-advertise-peer-urls":["https://192.168.39.161:2380"],"listen-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.161:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-06-30T15:35:19.197Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-30T15:35:20.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 is starting a new election at term 2"}
	{"level":"info","ts":"2025-06-30T15:35:20.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-06-30T15:35:20.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgPreVoteResp from 59d4e9d626571860 at term 2"}
	{"level":"info","ts":"2025-06-30T15:35:20.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T15:35:20.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgVoteResp from 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2025-06-30T15:35:20.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became leader at term 3"}
	{"level":"info","ts":"2025-06-30T15:35:20.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 59d4e9d626571860 elected leader 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2025-06-30T15:35:20.968Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"59d4e9d626571860","local-member-attributes":"{Name:test-preload-833225 ClientURLs:[https://192.168.39.161:2379]}","request-path":"/0/members/59d4e9d626571860/attributes","cluster-id":"641f62d988bc06c1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T15:35:20.968Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:35:20.969Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T15:35:20.969Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T15:35:20.969Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:35:20.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.161:2379"}
	{"level":"info","ts":"2025-06-30T15:35:20.971Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:35:37 up 0 min,  0 users,  load average: 1.04, 0.30, 0.10
	Linux test-preload-833225 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [151f928bfe266b82183ce1578a41827aaf6f6180f2c4fb212028cfe116524486] <==
	I0630 15:35:23.360582       1 establishing_controller.go:76] Starting EstablishingController
	I0630 15:35:23.360612       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0630 15:35:23.360630       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0630 15:35:23.360663       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0630 15:35:23.373471       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0630 15:35:23.392600       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0630 15:35:23.501311       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0630 15:35:23.538342       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0630 15:35:23.550890       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0630 15:35:23.550968       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0630 15:35:23.551560       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0630 15:35:23.567716       1 cache.go:39] Caches are synced for autoregister controller
	I0630 15:35:23.568428       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0630 15:35:23.568500       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0630 15:35:23.568953       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0630 15:35:24.029531       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0630 15:35:24.347168       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 15:35:24.883104       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0630 15:35:24.906246       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0630 15:35:24.963918       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0630 15:35:24.997011       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 15:35:25.010198       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 15:35:25.328031       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0630 15:35:35.931389       1 controller.go:611] quota admission added evaluator for: endpoints
	I0630 15:35:36.031346       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8235a9aa95a27de3badde42f9c98b17daef9f69c82a85139d91f7538ec888e4d] <==
	I0630 15:35:35.960470       1 shared_informer.go:262] Caches are synced for TTL
	I0630 15:35:35.973235       1 shared_informer.go:262] Caches are synced for GC
	I0630 15:35:36.018323       1 shared_informer.go:262] Caches are synced for deployment
	I0630 15:35:36.020292       1 shared_informer.go:262] Caches are synced for disruption
	I0630 15:35:36.020381       1 disruption.go:371] Sending events to api server.
	I0630 15:35:36.027478       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0630 15:35:36.037490       1 shared_informer.go:262] Caches are synced for ephemeral
	I0630 15:35:36.040834       1 shared_informer.go:262] Caches are synced for resource quota
	I0630 15:35:36.059283       1 shared_informer.go:262] Caches are synced for persistent volume
	I0630 15:35:36.069423       1 shared_informer.go:262] Caches are synced for daemon sets
	I0630 15:35:36.084631       1 shared_informer.go:262] Caches are synced for taint
	I0630 15:35:36.084772       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0630 15:35:36.084953       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0630 15:35:36.084974       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-833225. Assuming now as a timestamp.
	I0630 15:35:36.085081       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0630 15:35:36.085319       1 event.go:294] "Event occurred" object="test-preload-833225" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-833225 event: Registered Node test-preload-833225 in Controller"
	I0630 15:35:36.089563       1 shared_informer.go:262] Caches are synced for expand
	I0630 15:35:36.097583       1 shared_informer.go:262] Caches are synced for resource quota
	I0630 15:35:36.108461       1 shared_informer.go:262] Caches are synced for stateful set
	I0630 15:35:36.110292       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0630 15:35:36.111454       1 shared_informer.go:262] Caches are synced for PVC protection
	I0630 15:35:36.128849       1 shared_informer.go:262] Caches are synced for attach detach
	I0630 15:35:36.582271       1 shared_informer.go:262] Caches are synced for garbage collector
	I0630 15:35:36.602020       1 shared_informer.go:262] Caches are synced for garbage collector
	I0630 15:35:36.602145       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [7ffccb514aec89583db287fba13d5fd524b817417ff0238367677d0f92d59332] <==
	I0630 15:35:25.283967       1 node.go:163] Successfully retrieved node IP: 192.168.39.161
	I0630 15:35:25.284159       1 server_others.go:138] "Detected node IP" address="192.168.39.161"
	I0630 15:35:25.284252       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0630 15:35:25.323068       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0630 15:35:25.323085       1 server_others.go:206] "Using iptables Proxier"
	I0630 15:35:25.323114       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0630 15:35:25.323340       1 server.go:661] "Version info" version="v1.24.4"
	I0630 15:35:25.323348       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:35:25.324946       1 config.go:317] "Starting service config controller"
	I0630 15:35:25.325046       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0630 15:35:25.325086       1 config.go:226] "Starting endpoint slice config controller"
	I0630 15:35:25.325152       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0630 15:35:25.325670       1 config.go:444] "Starting node config controller"
	I0630 15:35:25.325737       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0630 15:35:25.425447       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0630 15:35:25.425489       1 shared_informer.go:262] Caches are synced for service config
	I0630 15:35:25.426146       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [9b1853a9b466232931af122e548ee265f5bed4c7e99fc7b8efb6c569e5d17d82] <==
	I0630 15:35:20.180085       1 serving.go:348] Generated self-signed cert in-memory
	W0630 15:35:23.454537       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0630 15:35:23.456879       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0630 15:35:23.457001       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 15:35:23.457029       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 15:35:23.505199       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0630 15:35:23.505234       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:35:23.508460       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0630 15:35:23.509143       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:35:23.509171       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0630 15:35:23.509190       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0630 15:35:23.610674       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 30 15:35:23 test-preload-833225 kubelet[1496]: I0630 15:35:23.578607    1496 setters.go:532] "Node became not ready" node="test-preload-833225" condition={Type:Ready Status:False LastHeartbeatTime:2025-06-30 15:35:23.578558261 +0000 UTC m=+5.613714686 LastTransitionTime:2025-06-30 15:35:23.578558261 +0000 UTC m=+5.613714686 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.092017    1496 apiserver.go:52] "Watching apiserver"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.094943    1496 topology_manager.go:200] "Topology Admit Handler"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.095062    1496 topology_manager.go:200] "Topology Admit Handler"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.095098    1496 topology_manager.go:200] "Topology Admit Handler"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: E0630 15:35:24.098223    1496 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-w868v" podUID=f6fdd454-a457-45e6-913e-efbcd6442606
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152056    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume\") pod \"coredns-6d4b75cb6d-w868v\" (UID: \"f6fdd454-a457-45e6-913e-efbcd6442606\") " pod="kube-system/coredns-6d4b75cb6d-w868v"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152117    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63f3d7c6-caa0-4767-8ba7-58fa55ad3603-kube-proxy\") pod \"kube-proxy-jqwrl\" (UID: \"63f3d7c6-caa0-4767-8ba7-58fa55ad3603\") " pod="kube-system/kube-proxy-jqwrl"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152143    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63f3d7c6-caa0-4767-8ba7-58fa55ad3603-xtables-lock\") pod \"kube-proxy-jqwrl\" (UID: \"63f3d7c6-caa0-4767-8ba7-58fa55ad3603\") " pod="kube-system/kube-proxy-jqwrl"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152162    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27mr\" (UniqueName: \"kubernetes.io/projected/f6fdd454-a457-45e6-913e-efbcd6442606-kube-api-access-b27mr\") pod \"coredns-6d4b75cb6d-w868v\" (UID: \"f6fdd454-a457-45e6-913e-efbcd6442606\") " pod="kube-system/coredns-6d4b75cb6d-w868v"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152186    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63f3d7c6-caa0-4767-8ba7-58fa55ad3603-lib-modules\") pod \"kube-proxy-jqwrl\" (UID: \"63f3d7c6-caa0-4767-8ba7-58fa55ad3603\") " pod="kube-system/kube-proxy-jqwrl"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152206    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrtkv\" (UniqueName: \"kubernetes.io/projected/63f3d7c6-caa0-4767-8ba7-58fa55ad3603-kube-api-access-qrtkv\") pod \"kube-proxy-jqwrl\" (UID: \"63f3d7c6-caa0-4767-8ba7-58fa55ad3603\") " pod="kube-system/kube-proxy-jqwrl"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152224    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c402b93-cc4a-49a7-af70-f77df9c53852-tmp\") pod \"storage-provisioner\" (UID: \"5c402b93-cc4a-49a7-af70-f77df9c53852\") " pod="kube-system/storage-provisioner"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152242    1496 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rgzb\" (UniqueName: \"kubernetes.io/projected/5c402b93-cc4a-49a7-af70-f77df9c53852-kube-api-access-7rgzb\") pod \"storage-provisioner\" (UID: \"5c402b93-cc4a-49a7-af70-f77df9c53852\") " pod="kube-system/storage-provisioner"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: I0630 15:35:24.152252    1496 reconciler.go:159] "Reconciler: start to sync state"
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: E0630 15:35:24.257293    1496 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: E0630 15:35:24.257452    1496 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume podName:f6fdd454-a457-45e6-913e-efbcd6442606 nodeName:}" failed. No retries permitted until 2025-06-30 15:35:24.757410959 +0000 UTC m=+6.792567394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume") pod "coredns-6d4b75cb6d-w868v" (UID: "f6fdd454-a457-45e6-913e-efbcd6442606") : object "kube-system"/"coredns" not registered
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: E0630 15:35:24.760060    1496 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 30 15:35:24 test-preload-833225 kubelet[1496]: E0630 15:35:24.760117    1496 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume podName:f6fdd454-a457-45e6-913e-efbcd6442606 nodeName:}" failed. No retries permitted until 2025-06-30 15:35:25.760104229 +0000 UTC m=+7.795260667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume") pod "coredns-6d4b75cb6d-w868v" (UID: "f6fdd454-a457-45e6-913e-efbcd6442606") : object "kube-system"/"coredns" not registered
	Jun 30 15:35:25 test-preload-833225 kubelet[1496]: E0630 15:35:25.769322    1496 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 30 15:35:25 test-preload-833225 kubelet[1496]: E0630 15:35:25.769931    1496 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume podName:f6fdd454-a457-45e6-913e-efbcd6442606 nodeName:}" failed. No retries permitted until 2025-06-30 15:35:27.769912431 +0000 UTC m=+9.805068870 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume") pod "coredns-6d4b75cb6d-w868v" (UID: "f6fdd454-a457-45e6-913e-efbcd6442606") : object "kube-system"/"coredns" not registered
	Jun 30 15:35:26 test-preload-833225 kubelet[1496]: E0630 15:35:26.195438    1496 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-w868v" podUID=f6fdd454-a457-45e6-913e-efbcd6442606
	Jun 30 15:35:26 test-preload-833225 kubelet[1496]: I0630 15:35:26.247139    1496 scope.go:110] "RemoveContainer" containerID="953a7e238b3ea296c436c937ca745c41962793a19ed5e01d9ae6166e04ef448e"
	Jun 30 15:35:27 test-preload-833225 kubelet[1496]: E0630 15:35:27.780704    1496 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 30 15:35:27 test-preload-833225 kubelet[1496]: E0630 15:35:27.781215    1496 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume podName:f6fdd454-a457-45e6-913e-efbcd6442606 nodeName:}" failed. No retries permitted until 2025-06-30 15:35:31.781170983 +0000 UTC m=+13.816327419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f6fdd454-a457-45e6-913e-efbcd6442606-config-volume") pod "coredns-6d4b75cb6d-w868v" (UID: "f6fdd454-a457-45e6-913e-efbcd6442606") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [3d63654042f0e49488add819a12be55e8c3aa77986ae9d86de115441e1cf2dd5] <==
	I0630 15:35:26.500560       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0630 15:35:26.527908       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0630 15:35:26.527965       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [953a7e238b3ea296c436c937ca745c41962793a19ed5e01d9ae6166e04ef448e] <==
	I0630 15:35:25.221736       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0630 15:35:25.231178       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-833225 -n test-preload-833225
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-833225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-833225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-833225
--- FAIL: TestPreload (178.26s)
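The kubelet entries in the post-mortem above also show why coredns eventually recovered on its own: each failed MountVolume.SetUp for the coredns config-volume doubled the backoff (durationBeforeRetry 500ms, 1s, 2s, 4s) until the ConfigMap was registered. A minimal Go sketch of that doubling-with-cap retry pattern follows; retryWithBackoff and its parameters are illustrative names for this report, not kubelet's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff retries op, doubling the wait after each failure up to
	// maxDelay -- the same 500ms -> 1s -> 2s -> 4s progression the kubelet
	// logged as "durationBeforeRetry" while the coredns ConfigMap was not yet
	// registered. Illustrative sketch only, not kubelet code.
	func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
		delay := initial
		var err error
		for i := 1; i <= attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", i, err, delay)
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 4 { // succeed on the 4th try, like coredns above
				return errors.New(`object "kube-system"/"coredns" not registered`)
			}
			return nil
		}, 500*time.Millisecond, 2*time.Minute, 10)
		fmt.Println("final:", err)
	}

Running the sketch reproduces the same geometric progression of waits that the kubelet logged before the mount finally succeeded.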

                                                
                                    
TestScheduledStopUnix (49.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-224018 --memory=3072 --driver=kvm2  --container-runtime=crio
E0630 15:36:04.687244 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-224018 --memory=3072 --driver=kvm2  --container-runtime=crio: (46.49493998s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224018 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-224018 -n scheduled-stop-224018
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224018 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 1598553 running but should have been killed on reschedule of stop
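The check at scheduled_stop_test.go:98 expects each rescheduled stop to kill the previously spawned scheduled-stop process before arming a new timer; here PID 1598553 survived the reschedule. A minimal Go sketch of the conventional Unix liveness probe such a check can rely on (signal 0 reports existence without delivering anything); this is an illustration under that assumption, not the test's actual code:

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	// processAlive reports whether pid refers to a live process. On Unix,
	// os.FindProcess always succeeds, and sending signal 0 performs the
	// existence/permission check without delivering a signal.
	func processAlive(pid int) bool {
		p, err := os.FindProcess(pid)
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil
	}

	func main() {
		pid := 1598553 // the leftover process reported by the failure above
		if processAlive(pid) {
			fmt.Printf("process %d running but should have been killed\n", pid)
		}
	}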
panic.go:631: *** TestScheduledStopUnix FAILED at 2025-06-30 15:36:26.199629383 +0000 UTC m=+4729.287510194
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-224018 -n scheduled-stop-224018
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p scheduled-stop-224018 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p scheduled-stop-224018 logs -n 25: (1.062674317s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-973445            | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:21 UTC | 30 Jun 25 15:24 UTC |
	| start   | -p multinode-973445            | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:24 UTC | 30 Jun 25 15:27 UTC |
	|         | --wait=true -v=5               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-973445       | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:27 UTC |                     |
	| node    | multinode-973445 node delete   | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:27 UTC | 30 Jun 25 15:27 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-973445 stop          | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:27 UTC | 30 Jun 25 15:30 UTC |
	| start   | -p multinode-973445            | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:30 UTC | 30 Jun 25 15:31 UTC |
	|         | --wait=true -v=5               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | list -p multinode-973445       | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:31 UTC |                     |
	| start   | -p multinode-973445-m02        | multinode-973445-m02  | jenkins | v1.36.0 | 30 Jun 25 15:31 UTC |                     |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| start   | -p multinode-973445-m03        | multinode-973445-m03  | jenkins | v1.36.0 | 30 Jun 25 15:31 UTC | 30 Jun 25 15:32 UTC |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | add -p multinode-973445        | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC |                     |
	| delete  | -p multinode-973445-m03        | multinode-973445-m03  | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC | 30 Jun 25 15:32 UTC |
	| delete  | -p multinode-973445            | multinode-973445      | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC | 30 Jun 25 15:32 UTC |
	| start   | -p test-preload-833225         | test-preload-833225   | jenkins | v1.36.0 | 30 Jun 25 15:32 UTC | 30 Jun 25 15:34 UTC |
	|         | --memory=3072                  |                       |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                       |         |         |                     |                     |
	|         | --preload=false --driver=kvm2  |                       |         |         |                     |                     |
	|         |  --container-runtime=crio      |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-833225 image pull | test-preload-833225   | jenkins | v1.36.0 | 30 Jun 25 15:34 UTC | 30 Jun 25 15:34 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-833225         | test-preload-833225   | jenkins | v1.36.0 | 30 Jun 25 15:34 UTC | 30 Jun 25 15:34 UTC |
	| start   | -p test-preload-833225         | test-preload-833225   | jenkins | v1.36.0 | 30 Jun 25 15:34 UTC | 30 Jun 25 15:35 UTC |
	|         | --memory=3072                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| image   | test-preload-833225 image list | test-preload-833225   | jenkins | v1.36.0 | 30 Jun 25 15:35 UTC | 30 Jun 25 15:35 UTC |
	| delete  | -p test-preload-833225         | test-preload-833225   | jenkins | v1.36.0 | 30 Jun 25 15:35 UTC | 30 Jun 25 15:35 UTC |
	| start   | -p scheduled-stop-224018       | scheduled-stop-224018 | jenkins | v1.36.0 | 30 Jun 25 15:35 UTC | 30 Jun 25 15:36 UTC |
	|         | --memory=3072 --driver=kvm2    |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-224018       | scheduled-stop-224018 | jenkins | v1.36.0 | 30 Jun 25 15:36 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-224018       | scheduled-stop-224018 | jenkins | v1.36.0 | 30 Jun 25 15:36 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-224018       | scheduled-stop-224018 | jenkins | v1.36.0 | 30 Jun 25 15:36 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-224018       | scheduled-stop-224018 | jenkins | v1.36.0 | 30 Jun 25 15:36 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-224018       | scheduled-stop-224018 | jenkins | v1.36.0 | 30 Jun 25 15:36 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-224018       | scheduled-stop-224018 | jenkins | v1.36.0 | 30 Jun 25 15:36 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:35:39
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:35:39.363531 1598127 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:35:39.363788 1598127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:35:39.363791 1598127 out.go:358] Setting ErrFile to fd 2...
	I0630 15:35:39.363794 1598127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:35:39.364024 1598127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:35:39.364642 1598127 out.go:352] Setting JSON to false
	I0630 15:35:39.365903 1598127 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33431,"bootTime":1751264308,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:35:39.366022 1598127 start.go:140] virtualization: kvm guest
	I0630 15:35:39.368694 1598127 out.go:177] * [scheduled-stop-224018] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:35:39.370098 1598127 notify.go:220] Checking for updates...
	I0630 15:35:39.370117 1598127 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:35:39.371471 1598127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:35:39.372757 1598127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:35:39.373843 1598127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:35:39.375463 1598127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:35:39.376719 1598127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:35:39.378173 1598127 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:35:39.417672 1598127 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:35:39.418892 1598127 start.go:304] selected driver: kvm2
	I0630 15:35:39.418903 1598127 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:35:39.418915 1598127 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:35:39.419782 1598127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:35:39.419860 1598127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:35:39.436905 1598127 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:35:39.436952 1598127 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 15:35:39.437198 1598127 start_flags.go:972] Wait components to verify : map[apiserver:true system_pods:true]
	I0630 15:35:39.437224 1598127 cni.go:84] Creating CNI manager for ""
	I0630 15:35:39.437267 1598127 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:35:39.437272 1598127 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 15:35:39.437341 1598127 start.go:347] cluster config:
	{Name:scheduled-stop-224018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:scheduled-stop-224018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:35:39.437511 1598127 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:35:39.439471 1598127 out.go:177] * Starting "scheduled-stop-224018" primary control-plane node in "scheduled-stop-224018" cluster
	I0630 15:35:39.441081 1598127 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:35:39.441125 1598127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 15:35:39.441132 1598127 cache.go:56] Caching tarball of preloaded images
	I0630 15:35:39.441242 1598127 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:35:39.441262 1598127 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 15:35:39.441615 1598127 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/config.json ...
	I0630 15:35:39.441667 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/config.json: {Name:mk3fba5b73626f585360943c88e7b77e41b5dff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:35:39.441849 1598127 start.go:360] acquireMachinesLock for scheduled-stop-224018: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:35:39.441878 1598127 start.go:364] duration metric: took 18.98µs to acquireMachinesLock for "scheduled-stop-224018"
	I0630 15:35:39.441894 1598127 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-224018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
33.2 ClusterName:scheduled-stop-224018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetric
s:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:35:39.441942 1598127 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 15:35:39.443667 1598127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0630 15:35:39.443825 1598127 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:35:39.443859 1598127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:35:39.459540 1598127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0630 15:35:39.460019 1598127 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:35:39.460556 1598127 main.go:141] libmachine: Using API Version  1
	I0630 15:35:39.460574 1598127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:35:39.461025 1598127 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:35:39.461321 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetMachineName
	I0630 15:35:39.461584 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:35:39.461780 1598127 start.go:159] libmachine.API.Create for "scheduled-stop-224018" (driver="kvm2")
	I0630 15:35:39.461805 1598127 client.go:168] LocalClient.Create starting
	I0630 15:35:39.461842 1598127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 15:35:39.461877 1598127 main.go:141] libmachine: Decoding PEM data...
	I0630 15:35:39.461890 1598127 main.go:141] libmachine: Parsing certificate...
	I0630 15:35:39.461955 1598127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 15:35:39.461969 1598127 main.go:141] libmachine: Decoding PEM data...
	I0630 15:35:39.461979 1598127 main.go:141] libmachine: Parsing certificate...
	I0630 15:35:39.461990 1598127 main.go:141] libmachine: Running pre-create checks...
	I0630 15:35:39.462002 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .PreCreateCheck
	I0630 15:35:39.462482 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetConfigRaw
	I0630 15:35:39.462910 1598127 main.go:141] libmachine: Creating machine...
	I0630 15:35:39.462917 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .Create
	I0630 15:35:39.463075 1598127 main.go:141] libmachine: (scheduled-stop-224018) creating KVM machine...
	I0630 15:35:39.463089 1598127 main.go:141] libmachine: (scheduled-stop-224018) creating network...
	I0630 15:35:39.464948 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found existing default KVM network
	I0630 15:35:39.466004 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:39.465801 1598150 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000137e0}
	I0630 15:35:39.466162 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | created network xml: 
	I0630 15:35:39.466180 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | <network>
	I0630 15:35:39.466191 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |   <name>mk-scheduled-stop-224018</name>
	I0630 15:35:39.466198 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |   <dns enable='no'/>
	I0630 15:35:39.466205 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |   
	I0630 15:35:39.466213 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 15:35:39.466225 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |     <dhcp>
	I0630 15:35:39.466237 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 15:35:39.466254 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |     </dhcp>
	I0630 15:35:39.466263 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |   </ip>
	I0630 15:35:39.466270 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG |   
	I0630 15:35:39.466273 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | </network>
	I0630 15:35:39.466281 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | 
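
The network definition above is plain libvirt XML with the chosen subnet filled in. A minimal sketch of rendering such a definition with Go's text/template (field names and layout are illustrative, not minikube's actual generator):

    package main

    import (
    	"os"
    	"text/template"
    )

    // netParams holds the values substituted into the XML; names are hypothetical.
    type netParams struct {
    	Name      string
    	Gateway   string
    	Netmask   string
    	DHCPFirst string
    	DHCPLast  string
    }

    const networkTmpl = `<network>
      <name>mk-{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.DHCPFirst}}' end='{{.DHCPLast}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
    	t := template.Must(template.New("net").Parse(networkTmpl))
    	// Values mirror the free private subnet picked in the log above.
    	p := netParams{
    		Name:      "scheduled-stop-224018",
    		Gateway:   "192.168.39.1",
    		Netmask:   "255.255.255.0",
    		DHCPFirst: "192.168.39.2",
    		DHCPLast:  "192.168.39.253",
    	}
    	_ = t.Execute(os.Stdout, p) // prints the XML that would be handed to libvirt
    }
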
	I0630 15:35:39.472129 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | trying to create private KVM network mk-scheduled-stop-224018 192.168.39.0/24...
	I0630 15:35:39.553372 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | private KVM network mk-scheduled-stop-224018 192.168.39.0/24 created
	I0630 15:35:39.553389 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:39.553320 1598150 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:35:39.553467 1598127 main.go:141] libmachine: (scheduled-stop-224018) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018 ...
	I0630 15:35:39.553491 1598127 main.go:141] libmachine: (scheduled-stop-224018) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 15:35:39.553511 1598127 main.go:141] libmachine: (scheduled-stop-224018) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 15:35:39.887913 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:39.887753 1598150 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa...
	I0630 15:35:40.112967 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:40.112780 1598150 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/scheduled-stop-224018.rawdisk...
	I0630 15:35:40.112990 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | Writing magic tar header
	I0630 15:35:40.113006 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | Writing SSH key tar header
	I0630 15:35:40.113017 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:40.112906 1598150 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018 ...
	I0630 15:35:40.113030 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018
	I0630 15:35:40.113039 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 15:35:40.113050 1598127 main.go:141] libmachine: (scheduled-stop-224018) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018 (perms=drwx------)
	I0630 15:35:40.113061 1598127 main.go:141] libmachine: (scheduled-stop-224018) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 15:35:40.113085 1598127 main.go:141] libmachine: (scheduled-stop-224018) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 15:35:40.113092 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:35:40.113105 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 15:35:40.113112 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 15:35:40.113121 1598127 main.go:141] libmachine: (scheduled-stop-224018) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 15:35:40.113131 1598127 main.go:141] libmachine: (scheduled-stop-224018) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 15:35:40.113138 1598127 main.go:141] libmachine: (scheduled-stop-224018) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 15:35:40.113145 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | checking permissions on dir: /home/jenkins
	I0630 15:35:40.113149 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | checking permissions on dir: /home
	I0630 15:35:40.113152 1598127 main.go:141] libmachine: (scheduled-stop-224018) creating domain...
	I0630 15:35:40.113160 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | skipping /home - not owner
	I0630 15:35:40.114684 1598127 main.go:141] libmachine: (scheduled-stop-224018) define libvirt domain using xml: 
	I0630 15:35:40.114698 1598127 main.go:141] libmachine: (scheduled-stop-224018) <domain type='kvm'>
	I0630 15:35:40.114703 1598127 main.go:141] libmachine: (scheduled-stop-224018)   <name>scheduled-stop-224018</name>
	I0630 15:35:40.114707 1598127 main.go:141] libmachine: (scheduled-stop-224018)   <memory unit='MiB'>3072</memory>
	I0630 15:35:40.114711 1598127 main.go:141] libmachine: (scheduled-stop-224018)   <vcpu>2</vcpu>
	I0630 15:35:40.114714 1598127 main.go:141] libmachine: (scheduled-stop-224018)   <features>
	I0630 15:35:40.114718 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <acpi/>
	I0630 15:35:40.114721 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <apic/>
	I0630 15:35:40.114733 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <pae/>
	I0630 15:35:40.114736 1598127 main.go:141] libmachine: (scheduled-stop-224018)     
	I0630 15:35:40.114741 1598127 main.go:141] libmachine: (scheduled-stop-224018)   </features>
	I0630 15:35:40.114744 1598127 main.go:141] libmachine: (scheduled-stop-224018)   <cpu mode='host-passthrough'>
	I0630 15:35:40.114748 1598127 main.go:141] libmachine: (scheduled-stop-224018)   
	I0630 15:35:40.114751 1598127 main.go:141] libmachine: (scheduled-stop-224018)   </cpu>
	I0630 15:35:40.114754 1598127 main.go:141] libmachine: (scheduled-stop-224018)   <os>
	I0630 15:35:40.114757 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <type>hvm</type>
	I0630 15:35:40.114761 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <boot dev='cdrom'/>
	I0630 15:35:40.114765 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <boot dev='hd'/>
	I0630 15:35:40.114769 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <bootmenu enable='no'/>
	I0630 15:35:40.114772 1598127 main.go:141] libmachine: (scheduled-stop-224018)   </os>
	I0630 15:35:40.114775 1598127 main.go:141] libmachine: (scheduled-stop-224018)   <devices>
	I0630 15:35:40.114779 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <disk type='file' device='cdrom'>
	I0630 15:35:40.114819 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/boot2docker.iso'/>
	I0630 15:35:40.114830 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <target dev='hdc' bus='scsi'/>
	I0630 15:35:40.114836 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <readonly/>
	I0630 15:35:40.114840 1598127 main.go:141] libmachine: (scheduled-stop-224018)     </disk>
	I0630 15:35:40.114845 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <disk type='file' device='disk'>
	I0630 15:35:40.114867 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 15:35:40.114875 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/scheduled-stop-224018.rawdisk'/>
	I0630 15:35:40.114880 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <target dev='hda' bus='virtio'/>
	I0630 15:35:40.114884 1598127 main.go:141] libmachine: (scheduled-stop-224018)     </disk>
	I0630 15:35:40.114888 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <interface type='network'>
	I0630 15:35:40.114892 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <source network='mk-scheduled-stop-224018'/>
	I0630 15:35:40.114896 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <model type='virtio'/>
	I0630 15:35:40.114899 1598127 main.go:141] libmachine: (scheduled-stop-224018)     </interface>
	I0630 15:35:40.114905 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <interface type='network'>
	I0630 15:35:40.114909 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <source network='default'/>
	I0630 15:35:40.114914 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <model type='virtio'/>
	I0630 15:35:40.114917 1598127 main.go:141] libmachine: (scheduled-stop-224018)     </interface>
	I0630 15:35:40.114922 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <serial type='pty'>
	I0630 15:35:40.114926 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <target port='0'/>
	I0630 15:35:40.114936 1598127 main.go:141] libmachine: (scheduled-stop-224018)     </serial>
	I0630 15:35:40.114940 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <console type='pty'>
	I0630 15:35:40.114944 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <target type='serial' port='0'/>
	I0630 15:35:40.114948 1598127 main.go:141] libmachine: (scheduled-stop-224018)     </console>
	I0630 15:35:40.114951 1598127 main.go:141] libmachine: (scheduled-stop-224018)     <rng model='virtio'>
	I0630 15:35:40.114955 1598127 main.go:141] libmachine: (scheduled-stop-224018)       <backend model='random'>/dev/random</backend>
	I0630 15:35:40.114958 1598127 main.go:141] libmachine: (scheduled-stop-224018)     </rng>
	I0630 15:35:40.114961 1598127 main.go:141] libmachine: (scheduled-stop-224018)     
	I0630 15:35:40.114964 1598127 main.go:141] libmachine: (scheduled-stop-224018)     
	I0630 15:35:40.114968 1598127 main.go:141] libmachine: (scheduled-stop-224018)   </devices>
	I0630 15:35:40.114970 1598127 main.go:141] libmachine: (scheduled-stop-224018) </domain>
	I0630 15:35:40.114979 1598127 main.go:141] libmachine: (scheduled-stop-224018) 
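
Once the domain XML is assembled, the driver defines the persistent domain and then boots it ("creating domain..." / "starting domain..." above). A sketch of that sequence using the official Go libvirt bindings; the API shape here is an assumption about the general pattern, not minikube's exact code:

    package main

    import (
    	"log"

    	libvirt "libvirt.org/go/libvirt"
    )

    func defineAndStart(domainXML string) error {
    	// Connect to the system daemon, matching KVMQemuURI:qemu:///system above.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	// Register the persistent domain from the XML shown in the log...
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	// ...then start it.
    	return dom.Create()
    }

    func main() {
    	// Placeholder XML; the real definition is the <domain> block above.
    	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
    		log.Fatal(err)
    	}
    }
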
	I0630 15:35:40.119991 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:7b:5a:14 in network default
	I0630 15:35:40.120744 1598127 main.go:141] libmachine: (scheduled-stop-224018) starting domain...
	I0630 15:35:40.120759 1598127 main.go:141] libmachine: (scheduled-stop-224018) ensuring networks are active...
	I0630 15:35:40.120770 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:40.121599 1598127 main.go:141] libmachine: (scheduled-stop-224018) Ensuring network default is active
	I0630 15:35:40.122009 1598127 main.go:141] libmachine: (scheduled-stop-224018) Ensuring network mk-scheduled-stop-224018 is active
	I0630 15:35:40.122667 1598127 main.go:141] libmachine: (scheduled-stop-224018) getting domain XML...
	I0630 15:35:40.123861 1598127 main.go:141] libmachine: (scheduled-stop-224018) creating domain...
	I0630 15:35:41.409220 1598127 main.go:141] libmachine: (scheduled-stop-224018) waiting for IP...
	I0630 15:35:41.410022 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:41.410450 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:41.410561 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:41.410479 1598150 retry.go:31] will retry after 240.013955ms: waiting for domain to come up
	I0630 15:35:41.652160 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:41.653099 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:41.653111 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:41.652935 1598150 retry.go:31] will retry after 297.95894ms: waiting for domain to come up
	I0630 15:35:41.952610 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:41.953067 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:41.953120 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:41.953070 1598150 retry.go:31] will retry after 331.134947ms: waiting for domain to come up
	I0630 15:35:42.285945 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:42.286507 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:42.286532 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:42.286458 1598150 retry.go:31] will retry after 515.415453ms: waiting for domain to come up
	I0630 15:35:42.803411 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:42.804201 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:42.804218 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:42.804141 1598150 retry.go:31] will retry after 643.405446ms: waiting for domain to come up
	I0630 15:35:43.449458 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:43.450126 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:43.450148 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:43.450096 1598150 retry.go:31] will retry after 846.986577ms: waiting for domain to come up
	I0630 15:35:44.298538 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:44.299069 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:44.299088 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:44.298998 1598150 retry.go:31] will retry after 723.155998ms: waiting for domain to come up
	I0630 15:35:45.023685 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:45.024053 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:45.024081 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:45.024036 1598150 retry.go:31] will retry after 1.077595697s: waiting for domain to come up
	I0630 15:35:46.103364 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:46.103939 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:46.103967 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:46.103873 1598150 retry.go:31] will retry after 1.224013608s: waiting for domain to come up
	I0630 15:35:47.329348 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:47.329816 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:47.329837 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:47.329786 1598150 retry.go:31] will retry after 2.121986244s: waiting for domain to come up
	I0630 15:35:49.454448 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:49.455339 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:49.455354 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:49.455291 1598150 retry.go:31] will retry after 2.21566574s: waiting for domain to come up
	I0630 15:35:51.674500 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:51.674946 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:51.674998 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:51.674926 1598150 retry.go:31] will retry after 2.464139396s: waiting for domain to come up
	I0630 15:35:54.142807 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:54.143405 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:54.143432 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:54.143326 1598150 retry.go:31] will retry after 4.43586931s: waiting for domain to come up
	I0630 15:35:58.583597 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:35:58.583946 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find current IP address of domain scheduled-stop-224018 in network mk-scheduled-stop-224018
	I0630 15:35:58.583965 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | I0630 15:35:58.583888 1598150 retry.go:31] will retry after 5.014710035s: waiting for domain to come up
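
The "waiting for domain to come up" loop above (retry.go) polls the DHCP leases with a randomized, growing delay until the guest acquires an address. A minimal sketch of that polling pattern; the lookup function and timing constants are hypothetical placeholders:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP stands in for querying the network's DHCP leases for the domain.
    func lookupIP() (string, error) { return "", errNoIP }

    // waitForIP retries with jittered, roughly exponential backoff until the
    // domain reports an address or the deadline passes.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("timed out after %v waiting for domain IP", timeout)
    }

    func main() {
    	if _, err := waitForIP(3 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
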
	I0630 15:36:03.603722 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.604251 1598127 main.go:141] libmachine: (scheduled-stop-224018) found domain IP: 192.168.39.39
	I0630 15:36:03.604263 1598127 main.go:141] libmachine: (scheduled-stop-224018) reserving static IP address...
	I0630 15:36:03.604275 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has current primary IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.604740 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | unable to find host DHCP lease matching {name: "scheduled-stop-224018", mac: "52:54:00:ff:52:a6", ip: "192.168.39.39"} in network mk-scheduled-stop-224018
	I0630 15:36:03.694801 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | Getting to WaitForSSH function...
	I0630 15:36:03.694828 1598127 main.go:141] libmachine: (scheduled-stop-224018) reserved static IP address 192.168.39.39 for domain scheduled-stop-224018
	I0630 15:36:03.694840 1598127 main.go:141] libmachine: (scheduled-stop-224018) waiting for SSH...
	I0630 15:36:03.697327 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.697778 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:03.697801 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.697961 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | Using SSH client type: external
	I0630 15:36:03.697973 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa (-rw-------)
	I0630 15:36:03.697996 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:36:03.698006 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | About to run SSH command:
	I0630 15:36:03.698013 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | exit 0
	I0630 15:36:03.822007 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | SSH cmd err, output: <nil>: 
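
The external SSH probe above shells out to /usr/bin/ssh with hardened options and runs `exit 0`; a nil error means the guest's sshd is reachable and the key is accepted. A sketch of assembling that command with os/exec (flag list copied from the log; the helper name and key path are illustrative):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func probeSSH(host, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + host,
    		"exit 0",
    	}
    	// Run returns nil only if the remote command exited 0.
    	return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
    	if err := probeSSH("192.168.39.39", "/path/to/id_rsa"); err != nil {
    		log.Fatal(err)
    	}
    }
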
	I0630 15:36:03.822417 1598127 main.go:141] libmachine: (scheduled-stop-224018) KVM machine creation complete
	I0630 15:36:03.822685 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetConfigRaw
	I0630 15:36:03.823269 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:03.823484 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:03.823647 1598127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:36:03.823656 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetState
	I0630 15:36:03.825091 1598127 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:36:03.825101 1598127 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:36:03.825107 1598127 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:36:03.825114 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:03.827216 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.827555 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:03.827575 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.827678 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:03.827893 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:03.828025 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:03.828177 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:03.828327 1598127 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:03.828572 1598127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0630 15:36:03.828577 1598127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:36:03.933055 1598127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
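
From here on the native Go SSH client (the &{...} dialer dump above) runs the same `exit 0` check and the later provisioning commands. A minimal sketch with golang.org/x/crypto/ssh, assuming the user, address, and key from the log:

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func runExitZero(addr, user, keyFile string) error {
    	key, err := os.ReadFile(keyFile)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") // nil error == remote command exited 0
    }

    func main() {
    	if err := runExitZero("192.168.39.39:22", "docker", "/path/to/id_rsa"); err != nil {
    		log.Fatal(err)
    	}
    }
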
	I0630 15:36:03.933073 1598127 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:36:03.933082 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:03.936315 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.936666 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:03.936683 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:03.936874 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:03.937136 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:03.937264 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:03.937435 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:03.937564 1598127 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:03.937763 1598127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0630 15:36:03.937769 1598127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:36:04.042763 1598127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:36:04.042843 1598127 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:36:04.042861 1598127 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:36:04.042868 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetMachineName
	I0630 15:36:04.043204 1598127 buildroot.go:166] provisioning hostname "scheduled-stop-224018"
	I0630 15:36:04.043229 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetMachineName
	I0630 15:36:04.043438 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:04.046614 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.047027 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:04.047046 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.047302 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:04.047514 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:04.047661 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:04.047809 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:04.047972 1598127 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:04.048191 1598127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0630 15:36:04.048198 1598127 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-224018 && echo "scheduled-stop-224018" | sudo tee /etc/hostname
	I0630 15:36:04.170750 1598127 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-224018
	
	I0630 15:36:04.170769 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:04.173608 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.173968 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:04.173986 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.174159 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:04.174343 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:04.174524 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:04.174664 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:04.174821 1598127 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:04.175100 1598127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0630 15:36:04.175118 1598127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-224018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-224018/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-224018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:36:04.287890 1598127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:36:04.287918 1598127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:36:04.287940 1598127 buildroot.go:174] setting up certificates
	I0630 15:36:04.287968 1598127 provision.go:84] configureAuth start
	I0630 15:36:04.287978 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetMachineName
	I0630 15:36:04.288355 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetIP
	I0630 15:36:04.292407 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.292951 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:04.292972 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.293100 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:04.295499 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.295802 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:04.295816 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.295943 1598127 provision.go:143] copyHostCerts
	I0630 15:36:04.295999 1598127 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:36:04.296015 1598127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:36:04.296080 1598127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:36:04.296174 1598127 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:36:04.296178 1598127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:36:04.296205 1598127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:36:04.296257 1598127 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:36:04.296260 1598127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:36:04.296283 1598127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:36:04.296350 1598127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-224018 san=[127.0.0.1 192.168.39.39 localhost minikube scheduled-stop-224018]
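
provision.go above mints a server certificate signed by the local minikube CA, with exactly the SAN list shown. A condensed sketch of issuing such a cert with crypto/x509; to stay self-contained it generates a throwaway CA rather than loading ca.pem/ca-key.pem, and error handling is abbreviated:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA; the real flow loads ca.pem and ca-key.pem instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.scheduled-stop-224018"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		DNSNames:     []string{"localhost", "minikube", "scheduled-stop-224018"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.39")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
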
	I0630 15:36:04.763808 1598127 provision.go:177] copyRemoteCerts
	I0630 15:36:04.763868 1598127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:36:04.763894 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:04.767115 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.767517 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:04.767558 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.767726 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:04.767989 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:04.768269 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:04.768487 1598127 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa Username:docker}
	I0630 15:36:04.858100 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:36:04.890099 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0630 15:36:04.917672 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0630 15:36:04.944276 1598127 provision.go:87] duration metric: took 656.295604ms to configureAuth
	I0630 15:36:04.944301 1598127 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:36:04.944488 1598127 config.go:182] Loaded profile config "scheduled-stop-224018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:36:04.944573 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:04.949191 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.949666 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:04.949690 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:04.949929 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:04.950180 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:04.950321 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:04.950479 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:04.950646 1598127 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:04.950843 1598127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0630 15:36:04.950853 1598127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:36:05.178773 1598127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:36:05.178792 1598127 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:36:05.178800 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetURL
	I0630 15:36:05.180028 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | using libvirt version 6000000
	I0630 15:36:05.182379 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.182723 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:05.182752 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.182890 1598127 main.go:141] libmachine: Docker is up and running!
	I0630 15:36:05.182901 1598127 main.go:141] libmachine: Reticulating splines...
	I0630 15:36:05.182908 1598127 client.go:171] duration metric: took 25.721096557s to LocalClient.Create
	I0630 15:36:05.182931 1598127 start.go:167] duration metric: took 25.721154887s to libmachine.API.Create "scheduled-stop-224018"
	I0630 15:36:05.182939 1598127 start.go:293] postStartSetup for "scheduled-stop-224018" (driver="kvm2")
	I0630 15:36:05.182947 1598127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:36:05.182963 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:05.183268 1598127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:36:05.183287 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:05.185713 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.186132 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:05.186152 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.186405 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:05.186627 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:05.186793 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:05.186901 1598127 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa Username:docker}
	I0630 15:36:05.274039 1598127 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:36:05.279045 1598127 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:36:05.279067 1598127 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:36:05.279156 1598127 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:36:05.279252 1598127 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:36:05.279345 1598127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:36:05.291743 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:36:05.326172 1598127 start.go:296] duration metric: took 143.191127ms for postStartSetup
	I0630 15:36:05.326235 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetConfigRaw
	I0630 15:36:05.326895 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetIP
	I0630 15:36:05.330359 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.330784 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:05.330810 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.331138 1598127 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/config.json ...
	I0630 15:36:05.331352 1598127 start.go:128] duration metric: took 25.889399621s to createHost
	I0630 15:36:05.331370 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:05.334055 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.334599 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:05.334726 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.335092 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:05.335331 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:05.335513 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:05.335664 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:05.335806 1598127 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:05.336007 1598127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0630 15:36:05.336022 1598127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:36:05.447417 1598127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751297765.424245161
	
	I0630 15:36:05.447442 1598127 fix.go:216] guest clock: 1751297765.424245161
	I0630 15:36:05.447451 1598127 fix.go:229] Guest: 2025-06-30 15:36:05.424245161 +0000 UTC Remote: 2025-06-30 15:36:05.331357938 +0000 UTC m=+26.009430573 (delta=92.887223ms)
	I0630 15:36:05.447506 1598127 fix.go:200] guest clock delta is within tolerance: 92.887223ms
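
fix.go above reads the guest's `date +%s.%N`, compares it with the host clock, and accepts the drift when it is inside a tolerance window. A sketch of that comparison (the tolerance value is an assumption; float parsing trades away sub-microsecond precision, which is fine for a drift check):

    package main

    import (
    	"fmt"
    	"log"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns output like "1751297765.424245161" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	f, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(f)
    	nsec := int64((f - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1751297765.424245161")
    	if err != nil {
    		log.Fatal(err)
    	}
    	delta := time.Since(guest)
    	const tolerance = time.Second // assumed threshold, not minikube's actual value
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock drifted by %v; would resync\n", delta)
    	}
    }
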
	I0630 15:36:05.447511 1598127 start.go:83] releasing machines lock for "scheduled-stop-224018", held for 26.005627224s
	I0630 15:36:05.447546 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:05.448072 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetIP
	I0630 15:36:05.452013 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.452383 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:05.452399 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.452621 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:05.453338 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:05.453584 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:05.453677 1598127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:36:05.453717 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:05.453840 1598127 ssh_runner.go:195] Run: cat /version.json
	I0630 15:36:05.453860 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:05.457877 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.457911 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.458640 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:05.458679 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:05.458696 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.458706 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:05.458889 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:05.458952 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:05.459242 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:05.459301 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:05.459574 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:05.459650 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:05.459931 1598127 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa Username:docker}
	I0630 15:36:05.459953 1598127 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa Username:docker}
	I0630 15:36:05.539212 1598127 ssh_runner.go:195] Run: systemctl --version
	I0630 15:36:05.575847 1598127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:36:05.742518 1598127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:36:05.748839 1598127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:36:05.748905 1598127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:36:05.769138 1598127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:36:05.769154 1598127 start.go:495] detecting cgroup driver to use...
	I0630 15:36:05.769222 1598127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:36:05.789485 1598127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:36:05.807851 1598127 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:36:05.807908 1598127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:36:05.825289 1598127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:36:05.842733 1598127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:36:05.984118 1598127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:36:06.141508 1598127 docker.go:246] disabling docker service ...
	I0630 15:36:06.141584 1598127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:36:06.158439 1598127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:36:06.173277 1598127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:36:06.355581 1598127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:36:06.494613 1598127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:36:06.510614 1598127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:36:06.533944 1598127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:36:06.534015 1598127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:06.546675 1598127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:36:06.546737 1598127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:06.559345 1598127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:06.571541 1598127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:06.584045 1598127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:36:06.597065 1598127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:06.609315 1598127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:06.629926 1598127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
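	Taken together, the sed edits above leave the drop-in with roughly these keys (a sketch; the enclosing [crio.image]/[crio.runtime] table headers come from the stock 02-crio.conf shipped in the ISO and are assumed here):

		pause_image = "registry.k8s.io/pause:3.10"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]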
	I0630 15:36:06.642029 1598127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:36:06.652904 1598127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:36:06.652972 1598127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:36:06.668205 1598127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
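	The failed sysctl is expected on a fresh VM: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why the probe is immediately followed by a modprobe. A minimal shell equivalent of the check-then-load-then-forward sequence (paths exactly as in the log):

		sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
		  || sudo modprobe br_netfilter          # loading the module creates /proc/sys/net/bridge/*
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"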
	I0630 15:36:06.679730 1598127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:36:06.815669 1598127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:36:06.925581 1598127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:36:06.925672 1598127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:36:06.931265 1598127 start.go:563] Will wait 60s for crictl version
	I0630 15:36:06.931325 1598127 ssh_runner.go:195] Run: which crictl
	I0630 15:36:06.935712 1598127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:36:06.982720 1598127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
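	Both "Will wait 60s" steps reduce to polling: first stat until the unix socket exists, then ask crictl for a version over it. A hand-rolled equivalent (timeout taken from the log):

		for i in $(seq 1 60); do
		  [ -S /var/run/crio/crio.sock ] && break  # same existence check stat performs above
		  sleep 1
		done
		sudo /usr/bin/crictl version               # prints the RuntimeName/RuntimeVersion block shown here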
	I0630 15:36:06.982798 1598127 ssh_runner.go:195] Run: crio --version
	I0630 15:36:07.011123 1598127 ssh_runner.go:195] Run: crio --version
	I0630 15:36:07.048893 1598127 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:36:07.050173 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetIP
	I0630 15:36:07.053792 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:07.054146 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:07.054161 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:07.054441 1598127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 15:36:07.058582 1598127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:36:07.073019 1598127 kubeadm.go:875] updating cluster {Name:scheduled-stop-224018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:scheduled-stop-224018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:36:07.073216 1598127 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:36:07.073294 1598127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:36:07.108365 1598127 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:36:07.108439 1598127 ssh_runner.go:195] Run: which lz4
	I0630 15:36:07.112750 1598127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:36:07.117002 1598127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:36:07.117036 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:36:08.654485 1598127 crio.go:462] duration metric: took 1.541772504s to copy over tarball
	I0630 15:36:08.654562 1598127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:36:10.800588 1598127 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.145998362s)
	I0630 15:36:10.800608 1598127 crio.go:469] duration metric: took 2.146100517s to extract the tarball
	I0630 15:36:10.800616 1598127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 15:36:10.841298 1598127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:36:10.887772 1598127 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:36:10.887836 1598127 cache_images.go:84] Images are preloaded, skipping loading
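	The preload path avoids pulling every control-plane image individually: when /preloaded.tar.lz4 is missing, the cached tarball is streamed over SSH and unpacked directly into /var so CRI-O's image store is populated before kubeadm ever runs. Roughly the same steps by hand (flags copied from the log; "<cache>" stands in for this CI host's cache dir, and minikube uses its own SSH client rather than scp):

		stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null \
		  || scp <cache>/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.39:/preloaded.tar.lz4
		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		sudo rm -f /preloaded.tar.lz4
		sudo crictl images --output json           # should now list registry.k8s.io/kube-apiserver:v1.33.2 et al.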
	I0630 15:36:10.887851 1598127 kubeadm.go:926] updating node { 192.168.39.39 8443 v1.33.2 crio true true} ...
	I0630 15:36:10.887991 1598127 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-224018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:scheduled-stop-224018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 15:36:10.888135 1598127 ssh_runner.go:195] Run: crio config
	I0630 15:36:10.944676 1598127 cni.go:84] Creating CNI manager for ""
	I0630 15:36:10.944689 1598127 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:36:10.944699 1598127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:36:10.944720 1598127 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-224018 NodeName:scheduled-stop-224018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:36:10.944845 1598127 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-224018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.39"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
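	Before a generated file like this reaches kubeadm init it can be checked on its own; recent kubeadm releases ship a validator subcommand, so (assuming it is present in the bundled v1.33.2 binaries) something like the following would vet the file once it is copied into place below:

		sudo /var/lib/minikube/binaries/v1.33.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml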
	I0630 15:36:10.944908 1598127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:36:10.956654 1598127 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:36:10.956724 1598127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:36:10.968179 1598127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0630 15:36:10.987536 1598127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:36:11.008428 1598127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0630 15:36:11.027483 1598127 ssh_runner.go:195] Run: grep 192.168.39.39	control-plane.minikube.internal$ /etc/hosts
	I0630 15:36:11.031454 1598127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:36:11.045638 1598127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:36:11.182743 1598127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:36:11.212690 1598127 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018 for IP: 192.168.39.39
	I0630 15:36:11.212704 1598127 certs.go:194] generating shared ca certs ...
	I0630 15:36:11.212721 1598127 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:11.212909 1598127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:36:11.212943 1598127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:36:11.212949 1598127 certs.go:256] generating profile certs ...
	I0630 15:36:11.213002 1598127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/client.key
	I0630 15:36:11.213035 1598127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/client.crt with IP's: []
	I0630 15:36:11.675514 1598127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/client.crt ...
	I0630 15:36:11.675538 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/client.crt: {Name:mka42d3a8103694b2b907a570a120f9d88bafc5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:11.675734 1598127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/client.key ...
	I0630 15:36:11.675741 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/client.key: {Name:mk672ebc8c6fe5568a13aa7e10b6ab231abc6cae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:11.675819 1598127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.key.28ef4f7f
	I0630 15:36:11.675830 1598127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.crt.28ef4f7f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39]
	I0630 15:36:11.872473 1598127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.crt.28ef4f7f ...
	I0630 15:36:11.872492 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.crt.28ef4f7f: {Name:mk6149e8ee79e0632f9557071ce5d1e394875798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:11.872674 1598127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.key.28ef4f7f ...
	I0630 15:36:11.872683 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.key.28ef4f7f: {Name:mk468314d8ae1e5d6db8cf8d9bb408336fdffc8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:11.872764 1598127 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.crt.28ef4f7f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.crt
	I0630 15:36:11.872838 1598127 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.key.28ef4f7f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.key
	I0630 15:36:11.872884 1598127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.key
	I0630 15:36:11.872895 1598127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.crt with IP's: []
	I0630 15:36:12.211634 1598127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.crt ...
	I0630 15:36:12.211654 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.crt: {Name:mke057918acf57ab407376d1bf07c84bb79c0677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:12.211828 1598127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.key ...
	I0630 15:36:12.211835 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.key: {Name:mk3c822bde25d546e13a0720f53b041c54921ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:12.212049 1598127 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:36:12.212082 1598127 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:36:12.212088 1598127 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:36:12.212107 1598127 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:36:12.212131 1598127 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:36:12.212154 1598127 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:36:12.212201 1598127 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:36:12.212781 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:36:12.246758 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:36:12.276707 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:36:12.307245 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:36:12.339782 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0630 15:36:12.368310 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0630 15:36:12.401577 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:36:12.433176 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/scheduled-stop-224018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 15:36:12.464645 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:36:12.495856 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:36:12.527334 1598127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:36:12.560848 1598127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:36:12.583134 1598127 ssh_runner.go:195] Run: openssl version
	I0630 15:36:12.591851 1598127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:36:12.606620 1598127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:36:12.612286 1598127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:36:12.612346 1598127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:36:12.619889 1598127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:36:12.635083 1598127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:36:12.647207 1598127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:36:12.652347 1598127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:36:12.652425 1598127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:36:12.659724 1598127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:36:12.672311 1598127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:36:12.684513 1598127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:36:12.689530 1598127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:36:12.689599 1598127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:36:12.696644 1598127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
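	The hex link names (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: OpenSSL looks CA certificates up by subject hash, so each symlink is the cert's -hash output plus a ".0" suffix. The pattern, using the minikubeCA file from the log:

		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"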
	I0630 15:36:12.708764 1598127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:36:12.713323 1598127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:36:12.713373 1598127 kubeadm.go:392] StartCluster: {Name:scheduled-stop-224018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:scheduled-stop-224018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:36:12.713451 1598127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:36:12.713515 1598127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:36:12.751256 1598127 cri.go:89] found id: ""
	I0630 15:36:12.751320 1598127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:36:12.762782 1598127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:36:12.774562 1598127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:36:12.787193 1598127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:36:12.787204 1598127 kubeadm.go:157] found existing configuration files:
	
	I0630 15:36:12.787253 1598127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:36:12.798660 1598127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:36:12.798718 1598127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:36:12.810872 1598127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:36:12.822012 1598127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:36:12.822066 1598127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:36:12.834433 1598127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:36:12.846171 1598127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:36:12.846227 1598127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:36:12.858917 1598127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:36:12.870857 1598127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:36:12.870911 1598127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
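	The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. Condensed into a loop:

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done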
	I0630 15:36:12.882765 1598127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:36:12.940366 1598127 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 15:36:12.940411 1598127 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:36:13.054996 1598127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:36:13.055134 1598127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:36:13.055426 1598127 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 15:36:13.072390 1598127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:36:13.115771 1598127 out.go:235]   - Generating certificates and keys ...
	I0630 15:36:13.115907 1598127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:36:13.115993 1598127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:36:13.256404 1598127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:36:13.609379 1598127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:36:13.780094 1598127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:36:13.888419 1598127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:36:14.215967 1598127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:36:14.216120 1598127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-224018] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0630 15:36:14.711945 1598127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:36:14.712167 1598127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-224018] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0630 15:36:14.924190 1598127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:36:15.087012 1598127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:36:15.266021 1598127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:36:15.266152 1598127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:36:15.445631 1598127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:36:15.709616 1598127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 15:36:15.924925 1598127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:36:16.024746 1598127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:36:16.248617 1598127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:36:16.248973 1598127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:36:16.251170 1598127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:36:16.253336 1598127 out.go:235]   - Booting up control plane ...
	I0630 15:36:16.253538 1598127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:36:16.253661 1598127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:36:16.253762 1598127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:36:16.269736 1598127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:36:16.275845 1598127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:36:16.275910 1598127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:36:16.441262 1598127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 15:36:16.441374 1598127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 15:36:17.442398 1598127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001501203s
	I0630 15:36:17.445164 1598127 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 15:36:17.445283 1598127 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.39:8443/livez
	I0630 15:36:17.445440 1598127 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 15:36:17.445890 1598127 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 15:36:19.785094 1598127 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.340171383s
	I0630 15:36:21.245161 1598127 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.800654037s
	I0630 15:36:22.946012 1598127 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.502931413s
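	The component checks are plain HTTP(S) probes and can be reproduced with curl (on the node itself for the localhost ports); -k is needed because the serving certificates are not in the host trust store, and the default authorization rules allow anonymous access to these health paths:

		curl -sk https://192.168.39.39:8443/livez      # kube-apiserver
		curl -sk https://127.0.0.1:10257/healthz       # kube-controller-manager
		curl -sk https://127.0.0.1:10259/livez         # kube-scheduler
		curl -s  http://127.0.0.1:10248/healthz        # kubelet (plain HTTP, from the earlier kubelet-check)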
	I0630 15:36:22.960276 1598127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 15:36:22.980551 1598127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 15:36:23.015666 1598127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 15:36:23.015846 1598127 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-224018 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 15:36:23.028973 1598127 kubeadm.go:310] [bootstrap-token] Using token: g6ilbr.u7n15g9wcop1cjfb
	I0630 15:36:23.030641 1598127 out.go:235]   - Configuring RBAC rules ...
	I0630 15:36:23.030814 1598127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 15:36:23.036051 1598127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 15:36:23.048170 1598127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 15:36:23.052663 1598127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 15:36:23.057792 1598127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 15:36:23.061907 1598127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 15:36:23.354282 1598127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 15:36:23.782462 1598127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 15:36:24.353781 1598127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 15:36:24.354639 1598127 kubeadm.go:310] 
	I0630 15:36:24.354714 1598127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 15:36:24.354718 1598127 kubeadm.go:310] 
	I0630 15:36:24.354782 1598127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 15:36:24.354787 1598127 kubeadm.go:310] 
	I0630 15:36:24.354820 1598127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 15:36:24.354904 1598127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 15:36:24.354996 1598127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 15:36:24.355008 1598127 kubeadm.go:310] 
	I0630 15:36:24.355092 1598127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 15:36:24.355098 1598127 kubeadm.go:310] 
	I0630 15:36:24.355165 1598127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 15:36:24.355170 1598127 kubeadm.go:310] 
	I0630 15:36:24.355237 1598127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 15:36:24.355334 1598127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 15:36:24.355430 1598127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 15:36:24.355435 1598127 kubeadm.go:310] 
	I0630 15:36:24.355549 1598127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 15:36:24.355658 1598127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 15:36:24.355666 1598127 kubeadm.go:310] 
	I0630 15:36:24.355742 1598127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g6ilbr.u7n15g9wcop1cjfb \
	I0630 15:36:24.355849 1598127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 15:36:24.355877 1598127 kubeadm.go:310] 	--control-plane 
	I0630 15:36:24.355882 1598127 kubeadm.go:310] 
	I0630 15:36:24.356023 1598127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 15:36:24.356032 1598127 kubeadm.go:310] 
	I0630 15:36:24.356137 1598127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g6ilbr.u7n15g9wcop1cjfb \
	I0630 15:36:24.356289 1598127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
	I0630 15:36:24.357575 1598127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
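	The bootstrap token above expires after the 24h ttl in the config, but --discovery-token-ca-cert-hash never changes for a given CA; it can be recomputed with the standard recipe from the kubeadm docs, pointed at minikube's certificate dir:

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'

	A fresh token (with the full join line) comes from "kubeadm token create --print-join-command".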
	I0630 15:36:24.357600 1598127 cni.go:84] Creating CNI manager for ""
	I0630 15:36:24.357607 1598127 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:36:24.359395 1598127 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 15:36:24.360967 1598127 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:36:24.373296 1598127 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
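	The 496-byte file pushed here is minikube's bridge conflist for the 10.244.0.0/16 pod CIDR chosen above. A representative config of the same shape (field values are illustrative, not a byte-for-byte copy of 1-k8s.conflist):

		{
		  "cniVersion": "1.0.0",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}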
	I0630 15:36:24.395144 1598127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:36:24.395250 1598127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:36:24.395293 1598127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-224018 minikube.k8s.io/updated_at=2025_06_30T15_36_24_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=scheduled-stop-224018 minikube.k8s.io/primary=true
	I0630 15:36:24.509673 1598127 kubeadm.go:1105] duration metric: took 114.504904ms to wait for elevateKubeSystemPrivileges
	I0630 15:36:24.509702 1598127 ops.go:34] apiserver oom_adj: -16
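	elevateKubeSystemPrivileges is the clusterrolebinding created at 15:36:24.395, granting cluster-admin to the kube-system default service account. Whether it took effect can be checked with kubectl's authorization dry-run against the same kubeconfig:

		sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		  auth can-i '*' '*' --as=system:serviceaccount:kube-system:default   # expect "yes"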
	I0630 15:36:24.538732 1598127 kubeadm.go:394] duration metric: took 11.825352706s to StartCluster
	I0630 15:36:24.538775 1598127 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:24.538860 1598127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:36:24.539702 1598127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:24.540006 1598127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 15:36:24.540018 1598127 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:36:24.540091 1598127 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:36:24.540194 1598127 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-224018"
	I0630 15:36:24.540196 1598127 config.go:182] Loaded profile config "scheduled-stop-224018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:36:24.540213 1598127 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-224018"
	I0630 15:36:24.540243 1598127 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-224018"
	I0630 15:36:24.540256 1598127 host.go:66] Checking if "scheduled-stop-224018" exists ...
	I0630 15:36:24.540273 1598127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-224018"
	I0630 15:36:24.540768 1598127 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:36:24.540768 1598127 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:36:24.540793 1598127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:36:24.540795 1598127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:36:24.541708 1598127 out.go:177] * Verifying Kubernetes components...
	I0630 15:36:24.543604 1598127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:36:24.559107 1598127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I0630 15:36:24.559418 1598127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40891
	I0630 15:36:24.559666 1598127 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:36:24.559954 1598127 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:36:24.560211 1598127 main.go:141] libmachine: Using API Version  1
	I0630 15:36:24.560231 1598127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:36:24.560459 1598127 main.go:141] libmachine: Using API Version  1
	I0630 15:36:24.560471 1598127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:36:24.560578 1598127 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:36:24.560764 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetState
	I0630 15:36:24.560832 1598127 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:36:24.561491 1598127 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:36:24.561542 1598127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:36:24.564359 1598127 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-224018"
	I0630 15:36:24.564397 1598127 host.go:66] Checking if "scheduled-stop-224018" exists ...
	I0630 15:36:24.564779 1598127 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:36:24.564825 1598127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:36:24.580148 1598127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0630 15:36:24.580686 1598127 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:36:24.581250 1598127 main.go:141] libmachine: Using API Version  1
	I0630 15:36:24.581272 1598127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:36:24.581743 1598127 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:36:24.581977 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetState
	I0630 15:36:24.582088 1598127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I0630 15:36:24.582590 1598127 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:36:24.583164 1598127 main.go:141] libmachine: Using API Version  1
	I0630 15:36:24.583183 1598127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:36:24.583569 1598127 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:36:24.583748 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:24.584169 1598127 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:36:24.584212 1598127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:36:24.585932 1598127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:36:24.587628 1598127 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:36:24.587639 1598127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:36:24.587659 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:24.591814 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:24.592277 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:24.592302 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:24.592581 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:24.592826 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:24.593048 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:24.593284 1598127 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa Username:docker}
	I0630 15:36:24.604974 1598127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0630 15:36:24.605616 1598127 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:36:24.606280 1598127 main.go:141] libmachine: Using API Version  1
	I0630 15:36:24.606303 1598127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:36:24.606650 1598127 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:36:24.606826 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetState
	I0630 15:36:24.609048 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .DriverName
	I0630 15:36:24.609391 1598127 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:36:24.609438 1598127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:36:24.609459 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHHostname
	I0630 15:36:24.614253 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:24.614745 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:52:a6", ip: ""} in network mk-scheduled-stop-224018: {Iface:virbr1 ExpiryTime:2025-06-30 16:35:54 +0000 UTC Type:0 Mac:52:54:00:ff:52:a6 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:scheduled-stop-224018 Clientid:01:52:54:00:ff:52:a6}
	I0630 15:36:24.614770 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | domain scheduled-stop-224018 has defined IP address 192.168.39.39 and MAC address 52:54:00:ff:52:a6 in network mk-scheduled-stop-224018
	I0630 15:36:24.614971 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHPort
	I0630 15:36:24.615288 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHKeyPath
	I0630 15:36:24.615453 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .GetSSHUsername
	I0630 15:36:24.615581 1598127 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/scheduled-stop-224018/id_rsa Username:docker}
	I0630 15:36:24.736533 1598127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 15:36:24.821553 1598127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:36:24.994907 1598127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:36:25.010292 1598127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:36:25.238957 1598127 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 15:36:25.239943 1598127 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:36:25.239989 1598127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:36:25.479660 1598127 main.go:141] libmachine: Making call to close driver server
	I0630 15:36:25.479672 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .Close
	I0630 15:36:25.479741 1598127 main.go:141] libmachine: Making call to close driver server
	I0630 15:36:25.479755 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .Close
	I0630 15:36:25.479755 1598127 api_server.go:72] duration metric: took 939.711804ms to wait for apiserver process to appear ...
	I0630 15:36:25.479763 1598127 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:36:25.479779 1598127 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I0630 15:36:25.480038 1598127 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:36:25.480048 1598127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:36:25.480055 1598127 main.go:141] libmachine: Making call to close driver server
	I0630 15:36:25.480061 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .Close
	I0630 15:36:25.480196 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | Closing plugin on server side
	I0630 15:36:25.480220 1598127 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:36:25.480229 1598127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:36:25.480248 1598127 main.go:141] libmachine: Making call to close driver server
	I0630 15:36:25.480254 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .Close
	I0630 15:36:25.480373 1598127 main.go:141] libmachine: (scheduled-stop-224018) DBG | Closing plugin on server side
	I0630 15:36:25.480398 1598127 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:36:25.480404 1598127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:36:25.480441 1598127 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:36:25.480447 1598127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:36:25.491964 1598127 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I0630 15:36:25.494232 1598127 api_server.go:141] control plane version: v1.33.2
	I0630 15:36:25.494251 1598127 api_server.go:131] duration metric: took 14.482219ms to wait for apiserver health ...
	I0630 15:36:25.494259 1598127 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:36:25.505813 1598127 system_pods.go:59] 5 kube-system pods found
	I0630 15:36:25.505835 1598127 system_pods.go:61] "etcd-scheduled-stop-224018" [2b25d0dc-1dad-41f9-9bc6-b22ebe2fed5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:36:25.505845 1598127 system_pods.go:61] "kube-apiserver-scheduled-stop-224018" [c3c495e2-727e-429d-8789-093d9eee023e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:36:25.505856 1598127 system_pods.go:61] "kube-controller-manager-scheduled-stop-224018" [b0bcd396-7dc9-474b-9e38-a395e70e540f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:36:25.505861 1598127 system_pods.go:61] "kube-scheduler-scheduled-stop-224018" [e27bc99c-f9e4-42c2-b6f5-10196e66e434] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:36:25.505865 1598127 system_pods.go:61] "storage-provisioner" [7e36d2af-be3f-48a3-8dcd-6903a4e5a3e3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0630 15:36:25.505871 1598127 system_pods.go:74] duration metric: took 11.607387ms to wait for pod list to return data ...
	I0630 15:36:25.505883 1598127 kubeadm.go:578] duration metric: took 965.840285ms to wait for: map[apiserver:true system_pods:true]
	I0630 15:36:25.505893 1598127 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:36:25.506292 1598127 main.go:141] libmachine: Making call to close driver server
	I0630 15:36:25.506300 1598127 main.go:141] libmachine: (scheduled-stop-224018) Calling .Close
	I0630 15:36:25.506656 1598127 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:36:25.506683 1598127 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:36:25.509363 1598127 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0630 15:36:25.510460 1598127 addons.go:514] duration metric: took 970.373658ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0630 15:36:25.510867 1598127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:36:25.510883 1598127 node_conditions.go:123] node cpu capacity is 2
	I0630 15:36:25.510894 1598127 node_conditions.go:105] duration metric: took 4.997874ms to run NodePressure ...
	I0630 15:36:25.510905 1598127 start.go:241] waiting for startup goroutines ...
	I0630 15:36:25.742925 1598127 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-224018" context rescaled to 1 replicas
	I0630 15:36:25.742955 1598127 start.go:246] waiting for cluster config update ...
	I0630 15:36:25.742964 1598127 start.go:255] writing updated cluster config ...
	I0630 15:36:25.743246 1598127 ssh_runner.go:195] Run: rm -f paused
	I0630 15:36:25.796279 1598127 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:36:25.798268 1598127 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-224018" cluster and "default" namespace by default
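	
	For reference, the CoreDNS rewrite logged at 15:36:24-15:36:25 above can be verified by hand. A minimal sketch, assuming kubectl is pointed at the "scheduled-stop-224018" context the Done! line mentions; the expected stanza is reconstructed from the sed expression in the log, not read from the live ConfigMap:
	
	    # Show the hosts block the sed pipeline injected into the coredns ConfigMap
	    kubectl --context scheduled-stop-224018 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	    # Expected output, per the sed insert above (indentation approximate):
	    #        hosts {
	    #           192.168.39.1 host.minikube.internal
	    #           fallthrough
	    #        }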
	
	
	==> CRI-O <==
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.909859719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751297786909832614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df9d735a-a1a2-4672-ae36-c9ab9f34314b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.910826528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2a4d5ac-4168-48bc-ba3b-59e1ee123915 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.910889723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2a4d5ac-4168-48bc-ba3b-59e1ee123915 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.911026098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92611c4b481c38d47b9a6ca3eb42fd3d15e645d673b95b176ac011fe7c59450c,PodSandboxId:168e4df9d86c5889be26dae84f953cb91b4f62a165f6d01be107321dc098ae77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751297777987653157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362ff85d31ee587ac8ae484a485685c6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ca137d4b211fe8863b5b359ad6e8407f3be5a097e6a834e2318c419b23ff5f,PodSandboxId:bb6d9183ef880c880fcb8a0e077f79e0de68e118455ca1c6f334f912865a48f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751297778028082376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72f4e40b1eac1d58df80fa8dee9296e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7509a3b6fb7ad66fe256bd4158dbcfc63e4a37beef971324238785d1f353956e,PodSandboxId:b761f538b72babc62bec4376065c6ba77684d664606f8a0983eb99ca3caef1a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751297777969336672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c04ab57ded1720eff1cae2d57e68c4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa13493a20058c3c4ec57fc0f1f13125ed2c2a1f989e62030af35440b39d1f9,PodSandboxId:563d49c5936b1b4e7937d227ba9508d9a905f1862e10c51793ac3dbdfef0932c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751297777917576426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57a63382adc4330da45b99d77c7b53b,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2a4d5ac-4168-48bc-ba3b-59e1ee123915 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.952844341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9ca7cc0-cb74-44a7-9699-3d24af708553 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.952931825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9ca7cc0-cb74-44a7-9699-3d24af708553 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.954447627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bd3644c-edbe-4d17-812d-4df43b3be534 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.955148851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751297786955124813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bd3644c-edbe-4d17-812d-4df43b3be534 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.956019645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd933d54-4810-4e2f-b943-c471d520cada name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.956110748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd933d54-4810-4e2f-b943-c471d520cada name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.956306604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92611c4b481c38d47b9a6ca3eb42fd3d15e645d673b95b176ac011fe7c59450c,PodSandboxId:168e4df9d86c5889be26dae84f953cb91b4f62a165f6d01be107321dc098ae77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751297777987653157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362ff85d31ee587ac8ae484a485685c6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ca137d4b211fe8863b5b359ad6e8407f3be5a097e6a834e2318c419b23ff5f,PodSandboxId:bb6d9183ef880c880fcb8a0e077f79e0de68e118455ca1c6f334f912865a48f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751297778028082376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72f4e40b1eac1d58df80fa8dee9296e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7509a3b6fb7ad66fe256bd4158dbcfc63e4a37beef971324238785d1f353956e,PodSandboxId:b761f538b72babc62bec4376065c6ba77684d664606f8a0983eb99ca3caef1a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751297777969336672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c04ab57ded1720eff1cae2d57e68c4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa13493a20058c3c4ec57fc0f1f13125ed2c2a1f989e62030af35440b39d1f9,PodSandboxId:563d49c5936b1b4e7937d227ba9508d9a905f1862e10c51793ac3dbdfef0932c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751297777917576426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57a63382adc4330da45b99d77c7b53b,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd933d54-4810-4e2f-b943-c471d520cada name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.996033163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1a91f63-373b-44bf-aeec-8676cc42ca87 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.996225635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1a91f63-373b-44bf-aeec-8676cc42ca87 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.997404683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=830fe81a-4526-405b-a07c-0dc03ef2ccd3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.997777290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751297786997753803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=830fe81a-4526-405b-a07c-0dc03ef2ccd3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.998310486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cf035a1-9d79-4b94-8729-69cf202b0842 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.998356122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cf035a1-9d79-4b94-8729-69cf202b0842 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:26 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:26.998507413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92611c4b481c38d47b9a6ca3eb42fd3d15e645d673b95b176ac011fe7c59450c,PodSandboxId:168e4df9d86c5889be26dae84f953cb91b4f62a165f6d01be107321dc098ae77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751297777987653157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362ff85d31ee587ac8ae484a485685c6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ca137d4b211fe8863b5b359ad6e8407f3be5a097e6a834e2318c419b23ff5f,PodSandboxId:bb6d9183ef880c880fcb8a0e077f79e0de68e118455ca1c6f334f912865a48f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751297778028082376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72f4e40b1eac1d58df80fa8dee9296e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7509a3b6fb7ad66fe256bd4158dbcfc63e4a37beef971324238785d1f353956e,PodSandboxId:b761f538b72babc62bec4376065c6ba77684d664606f8a0983eb99ca3caef1a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751297777969336672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c04ab57ded1720eff1cae2d57e68c4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa13493a20058c3c4ec57fc0f1f13125ed2c2a1f989e62030af35440b39d1f9,PodSandboxId:563d49c5936b1b4e7937d227ba9508d9a905f1862e10c51793ac3dbdfef0932c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751297777917576426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57a63382adc4330da45b99d77c7b53b,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cf035a1-9d79-4b94-8729-69cf202b0842 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:27 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:27.035634351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cc2a782-5259-4f09-824a-54f79ba3918b name=/runtime.v1.RuntimeService/Version
	Jun 30 15:36:27 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:27.035704297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cc2a782-5259-4f09-824a-54f79ba3918b name=/runtime.v1.RuntimeService/Version
	Jun 30 15:36:27 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:27.038293883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=022c51bf-e9a5-49aa-a8ec-eb438df57461 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:36:27 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:27.040339261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751297787040273293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=022c51bf-e9a5-49aa-a8ec-eb438df57461 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:36:27 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:27.041929553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=927348d9-3ef3-4090-a359-70746d06fd96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:27 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:27.042001507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=927348d9-3ef3-4090-a359-70746d06fd96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:36:27 scheduled-stop-224018 crio[860]: time="2025-06-30 15:36:27.042123776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92611c4b481c38d47b9a6ca3eb42fd3d15e645d673b95b176ac011fe7c59450c,PodSandboxId:168e4df9d86c5889be26dae84f953cb91b4f62a165f6d01be107321dc098ae77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751297777987653157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362ff85d31ee587ac8ae484a485685c6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ca137d4b211fe8863b5b359ad6e8407f3be5a097e6a834e2318c419b23ff5f,PodSandboxId:bb6d9183ef880c880fcb8a0e077f79e0de68e118455ca1c6f334f912865a48f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751297778028082376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72f4e40b1eac1d58df80fa8dee9296e,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7509a3b6fb7ad66fe256bd4158dbcfc63e4a37beef971324238785d1f353956e,PodSandboxId:b761f538b72babc62bec4376065c6ba77684d664606f8a0983eb99ca3caef1a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751297777969336672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c04ab57ded1720eff1cae2d57e68c4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa13493a20058c3c4ec57fc0f1f13125ed2c2a1f989e62030af35440b39d1f9,PodSandboxId:563d49c5936b1b4e7937d227ba9508d9a905f1862e10c51793ac3dbdfef0932c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751297777917576426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-scheduled-stop-224018,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57a63382adc4330da45b99d77c7b53b,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=927348d9-3ef3-4090-a359-70746d06fd96 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a7ca137d4b211       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   9 seconds ago       Running             etcd                      0                   bb6d9183ef880       etcd-scheduled-stop-224018
	92611c4b481c3       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b   9 seconds ago       Running             kube-scheduler            0                   168e4df9d86c5       kube-scheduler-scheduled-stop-224018
	7509a3b6fb7ad       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2   9 seconds ago       Running             kube-controller-manager   0                   b761f538b72ba       kube-controller-manager-scheduled-stop-224018
	3aa13493a2005       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e   9 seconds ago       Running             kube-apiserver            0                   563d49c5936b1       kube-apiserver-scheduled-stop-224018
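	
	The table above is the node's container list as reported by CRI-O. A minimal sketch for reproducing it by hand, assuming SSH access to the VM and that crictl on the guest is configured against the cri-o socket shown in the node annotations:
	
	    # Open a shell on the VM for this profile, then list all containers (including exited ones)
	    minikube ssh -p scheduled-stop-224018
	    sudo crictl ps -a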
	
	
	==> describe nodes <==
	Name:               scheduled-stop-224018
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=scheduled-stop-224018
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=scheduled-stop-224018
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T15_36_24_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 15:36:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-224018
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 15:36:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 15:36:24 +0000   Mon, 30 Jun 2025 15:36:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 15:36:24 +0000   Mon, 30 Jun 2025 15:36:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 15:36:24 +0000   Mon, 30 Jun 2025 15:36:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 15:36:24 +0000   Mon, 30 Jun 2025 15:36:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    scheduled-stop-224018
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f3bbc4e31a14a4e90a9d867d32ea1f8
	  System UUID:                0f3bbc4e-31a1-4a4e-90a9-d867d32ea1f8
	  Boot ID:                    f8e6d6ab-066d-4d2c-95b1-eb4bf7ee020f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-224018                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-224018             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-224018    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-224018             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (3%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet  Node scheduled-stop-224018 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet  Node scheduled-stop-224018 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet  Node scheduled-stop-224018 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 4s                 kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-224018 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-224018 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-224018 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                 kubelet  Node scheduled-stop-224018 status is now: NodeReady
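	
	The node dump above corresponds to kubectl describe node output; the node.kubernetes.io/not-ready:NoSchedule taint listed under Taints is the same one that left storage-provisioner Unschedulable earlier in the log. A minimal sketch for checking both, assuming the scheduled-stop-224018 context:
	
	    # Full node description, as captured above
	    kubectl --context scheduled-stop-224018 describe node scheduled-stop-224018
	    # Just the taints; empty once the kubelet reports Ready and the taint is lifted
	    kubectl --context scheduled-stop-224018 get node scheduled-stop-224018 -o jsonpath='{.spec.taints}'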
	
	
	==> dmesg <==
	[Jun30 15:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001159] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000045] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.151568] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun30 15:36] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096411] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.153190] kauditd_printk_skb: 67 callbacks suppressed
	
	
	==> etcd [a7ca137d4b211fe8863b5b359ad6e8407f3be5a097e6a834e2318c419b23ff5f] <==
	{"level":"info","ts":"2025-06-30T15:36:18.362332Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-06-30T15:36:18.365190Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"38979a8318efbb8d","initial-advertise-peer-urls":["https://192.168.39.39:2380"],"listen-peer-urls":["https://192.168.39.39:2380"],"advertise-client-urls":["https://192.168.39.39:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.39:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-06-30T15:36:18.365245Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-30T15:36:18.364915Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"info","ts":"2025-06-30T15:36:18.365279Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.39:2380"}
	{"level":"info","ts":"2025-06-30T15:36:18.995453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d is starting a new election at term 1"}
	{"level":"info","ts":"2025-06-30T15:36:18.995512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d became pre-candidate at term 1"}
	{"level":"info","ts":"2025-06-30T15:36:18.995542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d received MsgPreVoteResp from 38979a8318efbb8d at term 1"}
	{"level":"info","ts":"2025-06-30T15:36:18.995565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d became candidate at term 2"}
	{"level":"info","ts":"2025-06-30T15:36:18.995609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d received MsgVoteResp from 38979a8318efbb8d at term 2"}
	{"level":"info","ts":"2025-06-30T15:36:18.995621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d became leader at term 2"}
	{"level":"info","ts":"2025-06-30T15:36:18.995632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38979a8318efbb8d elected leader 38979a8318efbb8d at term 2"}
	{"level":"info","ts":"2025-06-30T15:36:18.999546Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"38979a8318efbb8d","local-member-attributes":"{Name:scheduled-stop-224018 ClientURLs:[https://192.168.39.39:2379]}","request-path":"/0/members/38979a8318efbb8d/attributes","cluster-id":"9d46469dd2e6eab1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T15:36:18.999657Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:36:19.000105Z","caller":"etcdserver/server.go:2697","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:36:19.000280Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:36:19.004887Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:36:19.005443Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T15:36:19.005526Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T15:36:19.007565Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.39:2379"}
	{"level":"info","ts":"2025-06-30T15:36:19.007940Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"9d46469dd2e6eab1","local-member-id":"38979a8318efbb8d","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:36:19.008035Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:36:19.008070Z","caller":"etcdserver/server.go:2721","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:36:19.009668Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:36:19.010627Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:36:27 up 0 min,  0 users,  load average: 0.75, 0.20, 0.07
	Linux scheduled-stop-224018 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3aa13493a20058c3c4ec57fc0f1f13125ed2c2a1f989e62030af35440b39d1f9] <==
	I0630 15:36:20.729052       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0630 15:36:20.729072       1 cache.go:39] Caches are synced for autoregister controller
	I0630 15:36:20.731459       1 shared_informer.go:357] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0630 15:36:20.731505       1 policy_source.go:240] refreshing policies
	I0630 15:36:20.731574       1 shared_informer.go:357] "Caches are synced" controller="ipallocator-repair-controller"
	I0630 15:36:20.732660       1 controller.go:667] quota admission added evaluator for: namespaces
	I0630 15:36:20.733472       1 shared_informer.go:357] "Caches are synced" controller="configmaps"
	I0630 15:36:20.765547       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0630 15:36:20.865610       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:36:20.865717       1 default_servicecidr_controller.go:214] Setting default ServiceCIDR condition Ready to True
	I0630 15:36:20.877196       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:36:20.877509       1 default_servicecidr_controller.go:136] Shutting down kubernetes-service-cidr-controller
	I0630 15:36:21.640857       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0630 15:36:21.650705       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0630 15:36:21.650741       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 15:36:22.416538       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 15:36:22.476007       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 15:36:22.542156       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0630 15:36:22.554114       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39]
	I0630 15:36:22.555156       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 15:36:22.576240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0630 15:36:22.695680       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0630 15:36:23.754898       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 15:36:23.772432       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0630 15:36:23.794632       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7509a3b6fb7ad66fe256bd4158dbcfc63e4a37beef971324238785d1f353956e] <==
	I0630 15:36:27.353225       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0630 15:36:27.365852       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-224018" podCIDRs=["10.244.0.0/24"]
	I0630 15:36:27.366263       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0630 15:36:27.383941       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0630 15:36:27.390455       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0630 15:36:27.390717       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 15:36:27.390855       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="scheduled-stop-224018"
	I0630 15:36:27.390958       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 15:36:27.392279       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0630 15:36:27.392347       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0630 15:36:27.392534       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 15:36:27.392593       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0630 15:36:27.392690       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0630 15:36:27.393008       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 15:36:27.393065       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 15:36:27.393214       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 15:36:27.393948       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0630 15:36:27.394501       1 shared_informer.go:357] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0630 15:36:27.396707       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 15:36:27.397322       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0630 15:36:27.401863       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0630 15:36:27.404119       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0630 15:36:27.417615       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0630 15:36:27.445419       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0630 15:36:27.445542       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
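	
	The "Set node PodCIDR" line above records the node-ipam-controller assigning 10.244.0.0/24 to the node. A minimal sketch for confirming the assignment from the client side, assuming the same context:
	
	    # Should print 10.244.0.0/24, matching the controller-manager log above
	    kubectl --context scheduled-stop-224018 get node scheduled-stop-224018 -o jsonpath='{.spec.podCIDR}'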
	
	
	==> kube-scheduler [92611c4b481c38d47b9a6ca3eb42fd3d15e645d673b95b176ac011fe7c59450c] <==
	W0630 15:36:21.201947       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 15:36:21.217436       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 15:36:21.218215       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:36:21.224524       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:36:21.224588       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:36:21.226932       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 15:36:21.227039       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0630 15:36:21.236149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 15:36:21.241098       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 15:36:21.241519       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 15:36:21.242827       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 15:36:21.242996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 15:36:21.243080       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 15:36:21.243193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 15:36:21.243276       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 15:36:21.243488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 15:36:21.243566       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 15:36:21.243615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 15:36:21.243659       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 15:36:21.243686       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 15:36:21.243943       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 15:36:21.244088       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 15:36:21.244179       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 15:36:22.071607       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I0630 15:36:22.625747       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.896463    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0c04ab57ded1720eff1cae2d57e68c4a-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-224018\" (UID: \"0c04ab57ded1720eff1cae2d57e68c4a\") " pod="kube-system/kube-controller-manager-scheduled-stop-224018"
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.896489    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b72f4e40b1eac1d58df80fa8dee9296e-etcd-data\") pod \"etcd-scheduled-stop-224018\" (UID: \"b72f4e40b1eac1d58df80fa8dee9296e\") " pod="kube-system/etcd-scheduled-stop-224018"
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.896507    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a57a63382adc4330da45b99d77c7b53b-k8s-certs\") pod \"kube-apiserver-scheduled-stop-224018\" (UID: \"a57a63382adc4330da45b99d77c7b53b\") " pod="kube-system/kube-apiserver-scheduled-stop-224018"
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.896523    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a57a63382adc4330da45b99d77c7b53b-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-224018\" (UID: \"a57a63382adc4330da45b99d77c7b53b\") " pod="kube-system/kube-apiserver-scheduled-stop-224018"
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.896537    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c04ab57ded1720eff1cae2d57e68c4a-ca-certs\") pod \"kube-controller-manager-scheduled-stop-224018\" (UID: \"0c04ab57ded1720eff1cae2d57e68c4a\") " pod="kube-system/kube-controller-manager-scheduled-stop-224018"
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.896554    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c04ab57ded1720eff1cae2d57e68c4a-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-224018\" (UID: \"0c04ab57ded1720eff1cae2d57e68c4a\") " pod="kube-system/kube-controller-manager-scheduled-stop-224018"
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.902288    1580 kubelet_node_status.go:124] "Node was previously registered" node="scheduled-stop-224018"
	Jun 30 15:36:23 scheduled-stop-224018 kubelet[1580]: I0630 15:36:23.902521    1580 kubelet_node_status.go:78] "Successfully registered node" node="scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.669664    1580 apiserver.go:52] "Watching apiserver"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.692521    1580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.761726    1580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.762325    1580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.762729    1580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.763098    1580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: E0630 15:36:24.796207    1580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-224018\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: E0630 15:36:24.799480    1580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-224018\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: E0630 15:36:24.808494    1580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-224018\" already exists" pod="kube-system/etcd-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: E0630 15:36:24.808717    1580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-scheduled-stop-224018\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-224018"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.841407    1580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-224018" podStartSLOduration=1.841354678 podStartE2EDuration="1.841354678s" podCreationTimestamp="2025-06-30 15:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-30 15:36:24.840773026 +0000 UTC m=+1.251912989" watchObservedRunningTime="2025-06-30 15:36:24.841354678 +0000 UTC m=+1.252494636"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.889763    1580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-224018" podStartSLOduration=1.889745703 podStartE2EDuration="1.889745703s" podCreationTimestamp="2025-06-30 15:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-30 15:36:24.8874482 +0000 UTC m=+1.298588178" watchObservedRunningTime="2025-06-30 15:36:24.889745703 +0000 UTC m=+1.300885665"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.889971    1580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-224018" podStartSLOduration=1.889885636 podStartE2EDuration="1.889885636s" podCreationTimestamp="2025-06-30 15:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-30 15:36:24.862443045 +0000 UTC m=+1.273583022" watchObservedRunningTime="2025-06-30 15:36:24.889885636 +0000 UTC m=+1.301025594"
	Jun 30 15:36:24 scheduled-stop-224018 kubelet[1580]: I0630 15:36:24.956566    1580 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
	Jun 30 15:36:27 scheduled-stop-224018 kubelet[1580]: I0630 15:36:27.436128    1580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-224018" podStartSLOduration=4.436111619 podStartE2EDuration="4.436111619s" podCreationTimestamp="2025-06-30 15:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-30 15:36:24.911622664 +0000 UTC m=+1.322762644" watchObservedRunningTime="2025-06-30 15:36:27.436111619 +0000 UTC m=+3.847251617"
	Jun 30 15:36:27 scheduled-stop-224018 kubelet[1580]: I0630 15:36:27.523489    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7e36d2af-be3f-48a3-8dcd-6903a4e5a3e3-tmp\") pod \"storage-provisioner\" (UID: \"7e36d2af-be3f-48a3-8dcd-6903a4e5a3e3\") " pod="kube-system/storage-provisioner"
	Jun 30 15:36:27 scheduled-stop-224018 kubelet[1580]: I0630 15:36:27.523533    1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7nnr\" (UniqueName: \"kubernetes.io/projected/7e36d2af-be3f-48a3-8dcd-6903a4e5a3e3-kube-api-access-l7nnr\") pod \"storage-provisioner\" (UID: \"7e36d2af-be3f-48a3-8dcd-6903a4e5a3e3\") " pod="kube-system/storage-provisioner"
	

                                                
                                                
-- /stdout --
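The burst of "Failed to watch ... is forbidden" reflector errors in the kube-scheduler log above is the usual transient noise of a restarting control plane: the scheduler's informers race the API server's RBAC caches, and the errors stop once those caches sync (see the "Caches are synced" line that follows them). A minimal client-go sketch of the equivalent permission probe, assuming only a reachable kubeconfig; it checks the kubeconfig's own identity, and every name below is illustrative rather than part of this test suite:

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; the probe runs as whatever identity it carries
		// (auditing system:kube-scheduler itself would need SubjectAccessReview
		// or impersonation instead).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Equivalent of `kubectl auth can-i list nodes`.
		sar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "nodes"},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("can list nodes:", resp.Status.Allowed)
	}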
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p scheduled-stop-224018 -n scheduled-stop-224018
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-224018 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-proxy-fsp6r storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-224018 describe pod kube-proxy-fsp6r storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-224018 describe pod kube-proxy-fsp6r storage-provisioner: exit status 1 (74.087904ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-proxy-fsp6r" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-224018 describe pod kube-proxy-fsp6r storage-provisioner: exit status 1
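The field-selector query at helpers_test.go:261 above is what flagged kube-proxy-fsp6r and storage-provisioner as non-running; the NotFound errors from the follow-up describe at helpers_test.go:277 likely mean those pods were replaced or removed between the two calls. A minimal client-go sketch of the same list, assuming a reachable kubeconfig (this is not the suite's actual helper code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Mirrors `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}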
helpers_test.go:175: Cleaning up "scheduled-stop-224018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-224018
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-224018: (1.091132731s)
--- FAIL: TestScheduledStopUnix (49.89s)

                                                
                                    
x
+
TestKubernetesUpgrade (421.36s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m32.375051176s)
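For orientation, the Run line above boils down to shelling out to the freshly built minikube binary; a self-contained sketch of that invocation via os/exec (the timeout value is an assumption, and this is not the suite's actual helper code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Mirror the Run line above; a generous timeout stands in for the
		// framework's own deadline handling.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
			"start", "-p", "kubernetes-upgrade-691468",
			"--memory=3072", "--kubernetes-version=v1.20.0",
			"--alsologtostderr", "-v=1",
			"--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		// On failure err is an *exec.ExitError carrying the exit status
		// (109 in the run recorded above).
		fmt.Printf("%s\nerr: %v\n", out, err)
	}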

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-691468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-691468" primary control-plane node in "kubernetes-upgrade-691468" cluster
	* Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0630 15:36:29.268644 1598823 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:36:29.268931 1598823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:36:29.268944 1598823 out.go:358] Setting ErrFile to fd 2...
	I0630 15:36:29.268950 1598823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:36:29.269314 1598823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:36:29.270252 1598823 out.go:352] Setting JSON to false
	I0630 15:36:29.271777 1598823 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33481,"bootTime":1751264308,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:36:29.271898 1598823 start.go:140] virtualization: kvm guest
	I0630 15:36:29.274321 1598823 out.go:177] * [kubernetes-upgrade-691468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:36:29.276233 1598823 notify.go:220] Checking for updates...
	I0630 15:36:29.276242 1598823 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:36:29.277795 1598823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:36:29.279365 1598823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:36:29.280905 1598823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:36:29.283160 1598823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:36:29.285867 1598823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:36:29.287438 1598823 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:36:29.332880 1598823 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:36:29.334755 1598823 start.go:304] selected driver: kvm2
	I0630 15:36:29.334780 1598823 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:36:29.334810 1598823 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:36:29.335576 1598823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:36:29.335663 1598823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:36:29.355364 1598823 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:36:29.355429 1598823 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 15:36:29.355752 1598823 start_flags.go:972] Wait components to verify : map[apiserver:true system_pods:true]
	I0630 15:36:29.355787 1598823 cni.go:84] Creating CNI manager for ""
	I0630 15:36:29.355843 1598823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:36:29.355856 1598823 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 15:36:29.355927 1598823 start.go:347] cluster config:
	{Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:36:29.356040 1598823 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:36:29.358287 1598823 out.go:177] * Starting "kubernetes-upgrade-691468" primary control-plane node in "kubernetes-upgrade-691468" cluster
	I0630 15:36:29.360333 1598823 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 15:36:29.360385 1598823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0630 15:36:29.360393 1598823 cache.go:56] Caching tarball of preloaded images
	I0630 15:36:29.360507 1598823 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:36:29.360519 1598823 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0630 15:36:29.360866 1598823 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/config.json ...
	I0630 15:36:29.360891 1598823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/config.json: {Name:mkb22ad803c366f5730e36047e5409be0dbbf65e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:36:29.361056 1598823 start.go:360] acquireMachinesLock for kubernetes-upgrade-691468: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:36:29.361086 1598823 start.go:364] duration metric: took 15.728µs to acquireMachinesLock for "kubernetes-upgrade-691468"
	I0630 15:36:29.361101 1598823 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:36:29.361190 1598823 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 15:36:29.363312 1598823 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0630 15:36:29.363593 1598823 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:36:29.363681 1598823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:36:29.382285 1598823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41037
	I0630 15:36:29.382827 1598823 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:36:29.383411 1598823 main.go:141] libmachine: Using API Version  1
	I0630 15:36:29.383442 1598823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:36:29.383959 1598823 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:36:29.384205 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:36:29.384445 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:29.384620 1598823 start.go:159] libmachine.API.Create for "kubernetes-upgrade-691468" (driver="kvm2")
	I0630 15:36:29.384652 1598823 client.go:168] LocalClient.Create starting
	I0630 15:36:29.384694 1598823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 15:36:29.384739 1598823 main.go:141] libmachine: Decoding PEM data...
	I0630 15:36:29.384760 1598823 main.go:141] libmachine: Parsing certificate...
	I0630 15:36:29.384839 1598823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 15:36:29.384867 1598823 main.go:141] libmachine: Decoding PEM data...
	I0630 15:36:29.384888 1598823 main.go:141] libmachine: Parsing certificate...
	I0630 15:36:29.384912 1598823 main.go:141] libmachine: Running pre-create checks...
	I0630 15:36:29.384930 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .PreCreateCheck
	I0630 15:36:29.385287 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetConfigRaw
	I0630 15:36:29.385668 1598823 main.go:141] libmachine: Creating machine...
	I0630 15:36:29.385684 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .Create
	I0630 15:36:29.385819 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) creating KVM machine...
	I0630 15:36:29.385841 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) creating network...
	I0630 15:36:29.387859 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found existing default KVM network
	I0630 15:36:29.389943 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:29.389691 1598879 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0630 15:36:29.390381 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:29.390296 1598879 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001166f0}
	I0630 15:36:29.390483 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | created network xml: 
	I0630 15:36:29.390505 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | <network>
	I0630 15:36:29.390517 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |   <name>mk-kubernetes-upgrade-691468</name>
	I0630 15:36:29.390536 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |   <dns enable='no'/>
	I0630 15:36:29.390548 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |   
	I0630 15:36:29.390557 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0630 15:36:29.390568 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |     <dhcp>
	I0630 15:36:29.390580 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0630 15:36:29.390592 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |     </dhcp>
	I0630 15:36:29.390598 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |   </ip>
	I0630 15:36:29.390609 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG |   
	I0630 15:36:29.390620 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | </network>
	I0630 15:36:29.390651 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | 
	I0630 15:36:29.397878 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | trying to create private KVM network mk-kubernetes-upgrade-691468 192.168.50.0/24...
	I0630 15:36:29.489292 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468 ...
	I0630 15:36:29.489326 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | private KVM network mk-kubernetes-upgrade-691468 192.168.50.0/24 created
	I0630 15:36:29.489342 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 15:36:29.489370 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 15:36:29.489395 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:29.489202 1598879 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:36:29.810704 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:29.810538 1598879 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa...
	I0630 15:36:29.878159 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:29.877971 1598879 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/kubernetes-upgrade-691468.rawdisk...
	I0630 15:36:29.878206 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Writing magic tar header
	I0630 15:36:29.878217 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Writing SSH key tar header
	I0630 15:36:29.878227 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:29.878090 1598879 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468 ...
	I0630 15:36:29.878246 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468
	I0630 15:36:29.878263 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 15:36:29.878275 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468 (perms=drwx------)
	I0630 15:36:29.878283 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:36:29.878292 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 15:36:29.878299 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 15:36:29.878306 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | checking permissions on dir: /home/jenkins
	I0630 15:36:29.878312 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | checking permissions on dir: /home
	I0630 15:36:29.878328 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 15:36:29.878342 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | skipping /home - not owner
	I0630 15:36:29.878360 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 15:36:29.878371 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 15:36:29.878378 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 15:36:29.878385 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 15:36:29.878392 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) creating domain...
	I0630 15:36:29.879786 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) define libvirt domain using xml: 
	I0630 15:36:29.879815 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) <domain type='kvm'>
	I0630 15:36:29.879841 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   <name>kubernetes-upgrade-691468</name>
	I0630 15:36:29.879853 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   <memory unit='MiB'>3072</memory>
	I0630 15:36:29.879864 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   <vcpu>2</vcpu>
	I0630 15:36:29.879878 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   <features>
	I0630 15:36:29.879891 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <acpi/>
	I0630 15:36:29.879903 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <apic/>
	I0630 15:36:29.879924 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <pae/>
	I0630 15:36:29.879935 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     
	I0630 15:36:29.879946 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   </features>
	I0630 15:36:29.879957 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   <cpu mode='host-passthrough'>
	I0630 15:36:29.879969 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   
	I0630 15:36:29.879977 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   </cpu>
	I0630 15:36:29.879989 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   <os>
	I0630 15:36:29.880004 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <type>hvm</type>
	I0630 15:36:29.880017 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <boot dev='cdrom'/>
	I0630 15:36:29.880030 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <boot dev='hd'/>
	I0630 15:36:29.880067 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <bootmenu enable='no'/>
	I0630 15:36:29.880086 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   </os>
	I0630 15:36:29.880094 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   <devices>
	I0630 15:36:29.880106 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <disk type='file' device='cdrom'>
	I0630 15:36:29.880132 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/boot2docker.iso'/>
	I0630 15:36:29.880147 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <target dev='hdc' bus='scsi'/>
	I0630 15:36:29.880159 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <readonly/>
	I0630 15:36:29.880164 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     </disk>
	I0630 15:36:29.880170 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <disk type='file' device='disk'>
	I0630 15:36:29.880176 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 15:36:29.880192 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/kubernetes-upgrade-691468.rawdisk'/>
	I0630 15:36:29.880204 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <target dev='hda' bus='virtio'/>
	I0630 15:36:29.880270 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     </disk>
	I0630 15:36:29.880296 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <interface type='network'>
	I0630 15:36:29.880310 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <source network='mk-kubernetes-upgrade-691468'/>
	I0630 15:36:29.880322 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <model type='virtio'/>
	I0630 15:36:29.880331 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     </interface>
	I0630 15:36:29.880341 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <interface type='network'>
	I0630 15:36:29.880361 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <source network='default'/>
	I0630 15:36:29.880373 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <model type='virtio'/>
	I0630 15:36:29.880383 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     </interface>
	I0630 15:36:29.880395 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <serial type='pty'>
	I0630 15:36:29.880406 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <target port='0'/>
	I0630 15:36:29.880418 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     </serial>
	I0630 15:36:29.880427 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <console type='pty'>
	I0630 15:36:29.880448 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <target type='serial' port='0'/>
	I0630 15:36:29.880462 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     </console>
	I0630 15:36:29.880494 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     <rng model='virtio'>
	I0630 15:36:29.880515 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)       <backend model='random'>/dev/random</backend>
	I0630 15:36:29.880524 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     </rng>
	I0630 15:36:29.880531 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     
	I0630 15:36:29.880536 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)     
	I0630 15:36:29.880543 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468)   </devices>
	I0630 15:36:29.880550 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) </domain>
	I0630 15:36:29.880556 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) 
	I0630 15:36:29.885142 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:bb:ad:71 in network default
	I0630 15:36:29.885758 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) starting domain...
	I0630 15:36:29.885778 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) ensuring networks are active...
	I0630 15:36:29.885787 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:29.886511 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Ensuring network default is active
	I0630 15:36:29.886834 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Ensuring network mk-kubernetes-upgrade-691468 is active
	I0630 15:36:29.887394 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) getting domain XML...
	I0630 15:36:29.888047 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) creating domain...
	I0630 15:36:31.298545 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) waiting for IP...
	I0630 15:36:31.299441 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:31.299962 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:31.300010 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:31.299946 1598879 retry.go:31] will retry after 198.136073ms: waiting for domain to come up
	I0630 15:36:31.499536 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:31.500234 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:31.500255 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:31.500169 1598879 retry.go:31] will retry after 234.976678ms: waiting for domain to come up
	I0630 15:36:31.736989 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:31.737514 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:31.737552 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:31.737484 1598879 retry.go:31] will retry after 305.000766ms: waiting for domain to come up
	I0630 15:36:32.044701 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:32.045170 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:32.045288 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:32.045198 1598879 retry.go:31] will retry after 451.992636ms: waiting for domain to come up
	I0630 15:36:32.499000 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:32.499758 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:32.499978 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:32.499796 1598879 retry.go:31] will retry after 475.240286ms: waiting for domain to come up
	I0630 15:36:32.977148 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:32.977738 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:32.977772 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:32.977689 1598879 retry.go:31] will retry after 828.347038ms: waiting for domain to come up
	I0630 15:36:33.807831 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:33.808460 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:33.808485 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:33.808388 1598879 retry.go:31] will retry after 728.000769ms: waiting for domain to come up
	I0630 15:36:34.538498 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:34.539091 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:34.539124 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:34.539039 1598879 retry.go:31] will retry after 1.215818724s: waiting for domain to come up
	I0630 15:36:35.756733 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:35.757223 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:35.757246 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:35.757195 1598879 retry.go:31] will retry after 1.657793724s: waiting for domain to come up
	I0630 15:36:37.417157 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:37.417525 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:37.417586 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:37.417523 1598879 retry.go:31] will retry after 1.620815435s: waiting for domain to come up
	I0630 15:36:39.040551 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:39.041000 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:39.041028 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:39.040964 1598879 retry.go:31] will retry after 2.781453588s: waiting for domain to come up
	I0630 15:36:41.824515 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:41.825006 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:41.825034 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:41.824973 1598879 retry.go:31] will retry after 2.366293968s: waiting for domain to come up
	I0630 15:36:44.192653 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:44.193267 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:44.193293 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:44.193191 1598879 retry.go:31] will retry after 3.05044149s: waiting for domain to come up
	I0630 15:36:47.246842 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:47.247467 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find current IP address of domain kubernetes-upgrade-691468 in network mk-kubernetes-upgrade-691468
	I0630 15:36:47.247537 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | I0630 15:36:47.247451 1598879 retry.go:31] will retry after 4.369683712s: waiting for domain to come up
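The "will retry after <duration>: waiting for domain to come up" cadence above comes from a growing, jittered backoff loop; a minimal self-contained sketch of that pattern under assumed constants (not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() with roughly doubling, jittered delays,
	// mirroring the "will retry after Xms" cadence in the log above.
	func waitFor(check func() error, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		delay := 200 * time.Millisecond
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			// Jitter: sleep somewhere in [0.5*delay, 1.5*delay).
			jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v\n", jittered)
			time.Sleep(jittered)
			delay *= 2
		}
	}

	func main() {
		start := time.Now()
		// Toy condition: "domain gets an IP" after about three seconds.
		err := waitFor(func() error {
			if time.Since(start) > 3*time.Second {
				return nil
			}
			return errors.New("domain has no IP yet")
		}, 30*time.Second)
		fmt.Println("done, err:", err)
	}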
	I0630 15:36:51.620107 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.620629 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) found domain IP: 192.168.50.75
	I0630 15:36:51.620662 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has current primary IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.620685 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) reserving static IP address...
	I0630 15:36:51.621197 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-691468", mac: "52:54:00:ee:c2:6f", ip: "192.168.50.75"} in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.716186 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) reserved static IP address 192.168.50.75 for domain kubernetes-upgrade-691468
	I0630 15:36:51.716224 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Getting to WaitForSSH function...
	I0630 15:36:51.716232 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) waiting for SSH...
	I0630 15:36:51.720711 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.721126 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:51.721158 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.721350 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Using SSH client type: external
	I0630 15:36:51.721387 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa (-rw-------)
	I0630 15:36:51.721448 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:36:51.721469 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | About to run SSH command:
	I0630 15:36:51.721492 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | exit 0
	I0630 15:36:51.861924 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | SSH cmd err, output: <nil>: 
	I0630 15:36:51.862200 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) KVM machine creation complete
	I0630 15:36:51.862479 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetConfigRaw
	I0630 15:36:51.863176 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:51.863447 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:51.863645 1598823 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:36:51.863663 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetState
	I0630 15:36:51.865383 1598823 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:36:51.865398 1598823 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:36:51.865424 1598823 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:36:51.865456 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:51.868914 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.869456 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:51.869491 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.869662 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:51.869853 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:51.870019 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:51.870230 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:51.870419 1598823 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:51.870671 1598823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:36:51.870685 1598823 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:36:51.984911 1598823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:36:51.984943 1598823 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:36:51.984953 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:51.987978 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.988355 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:51.988382 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:51.988546 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:51.988759 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:51.988954 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:51.989135 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:51.989310 1598823 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:51.989616 1598823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:36:51.989634 1598823 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:36:52.103879 1598823 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:36:52.104069 1598823 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:36:52.104108 1598823 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:36:52.104123 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:36:52.104456 1598823 buildroot.go:166] provisioning hostname "kubernetes-upgrade-691468"
	I0630 15:36:52.104479 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:36:52.104643 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:52.107879 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.108354 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:52.108380 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.108757 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:52.108977 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:52.109438 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:52.109653 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:52.109865 1598823 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:52.110135 1598823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:36:52.110150 1598823 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-691468 && echo "kubernetes-upgrade-691468" | sudo tee /etc/hostname
	I0630 15:36:52.238297 1598823 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-691468
	
	I0630 15:36:52.238330 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:52.241422 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.241760 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:52.241787 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.241993 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:52.242264 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:52.242488 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:52.242671 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:52.242888 1598823 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:52.243108 1598823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:36:52.243127 1598823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-691468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-691468/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-691468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:36:52.363606 1598823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
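The hosts-file fragment above is deliberately idempotent: do nothing if some line already ends in the hostname, rewrite an existing 127.0.1.1 entry if present, and append one otherwise. The same logic as a pure function over the file contents (a sketch, not the actual ssh_runner code):

	package main

	import (
		"fmt"
		"strings"
	)

	// setLoopbackHostname mirrors the grep/sed/tee fragment above: leave
	// /etc/hosts alone if some line already ends in the hostname, otherwise
	// rewrite an existing 127.0.1.1 entry or append a new one.
	func setLoopbackHostname(hosts, hostname string) string {
		lines := strings.Split(hosts, "\n")
		for _, line := range lines {
			if strings.HasSuffix(line, " "+hostname) || strings.HasSuffix(line, "\t"+hostname) {
				return hosts // hostname already resolvable
			}
		}
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + hostname // append a fresh entry
	}

	func main() {
		fmt.Println(setLoopbackHostname("127.0.0.1 localhost", "kubernetes-upgrade-691468"))
	}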
	I0630 15:36:52.363644 1598823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:36:52.363685 1598823 buildroot.go:174] setting up certificates
	I0630 15:36:52.363705 1598823 provision.go:84] configureAuth start
	I0630 15:36:52.363720 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:36:52.364040 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetIP
	I0630 15:36:52.367495 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.367859 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:52.367894 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.368028 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:52.370421 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.370881 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:52.370912 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.371004 1598823 provision.go:143] copyHostCerts
	I0630 15:36:52.371061 1598823 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:36:52.371086 1598823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:36:52.371167 1598823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:36:52.371291 1598823 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:36:52.371301 1598823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:36:52.371339 1598823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:36:52.371434 1598823 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:36:52.371445 1598823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:36:52.371479 1598823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:36:52.371585 1598823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-691468 san=[127.0.0.1 192.168.50.75 kubernetes-upgrade-691468 localhost minikube]
	I0630 15:36:52.557174 1598823 provision.go:177] copyRemoteCerts
	I0630 15:36:52.557276 1598823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:36:52.557312 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:52.560677 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.560998 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:52.561020 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.561294 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:52.561537 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:52.561724 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:52.561878 1598823 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:36:52.650781 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:36:52.683538 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0630 15:36:52.715700 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:36:52.747390 1598823 provision.go:87] duration metric: took 383.661081ms to configureAuth
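configureAuth above generates a server certificate whose SAN list ([127.0.0.1 192.168.50.75 kubernetes-upgrade-691468 localhost minikube]) matches the provision log, signed by the minikubeCA key. A sketch of that issuance with crypto/x509 (the 26280h lifetime is taken from the CertExpiration field in the cluster config later in this log; this is not minikube's actual helper):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a CA-signed server certificate carrying the IP and
	// DNS SANs seen in the provision log.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-691468"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.75")},
			DNSNames:     []string{"kubernetes-upgrade-691468", "localhost", "minikube"},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	}

	func main() {
		// Throwaway self-signed CA standing in for the ca.pem/ca-key.pem files
		// copied under .minikube/certs above.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
		der, err := newServerCert(caCert, caKey)
		fmt.Println(len(der), err)
	}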
	I0630 15:36:52.747431 1598823 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:36:52.747658 1598823 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:36:52.747754 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:52.750897 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.751359 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:52.751383 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:52.751630 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:52.751872 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:52.752069 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:52.752268 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:52.752443 1598823 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:52.752726 1598823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:36:52.752744 1598823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:36:53.008256 1598823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
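The container-runtime option lands in a one-line sysconfig drop-in, written and activated in a single remote command. A sketch of composing that command string (the option value is straight from the log; the helper name is made up):

	package main

	import "fmt"

	// crioSysconfigCmd builds the remote shell command that writes the CRI-O
	// minikube options drop-in and restarts the service, as run above.
	func crioSysconfigCmd(opts string) string {
		return fmt.Sprintf(
			`sudo mkdir -p /etc/sysconfig && printf %%s "
	CRIO_MINIKUBE_OPTIONS='%s'
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
	}

	func main() {
		fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 "))
	}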
	I0630 15:36:53.008286 1598823 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:36:53.008295 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetURL
	I0630 15:36:53.010323 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | using libvirt version 6000000
	I0630 15:36:53.013194 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.013956 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:53.013991 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.014342 1598823 main.go:141] libmachine: Docker is up and running!
	I0630 15:36:53.014357 1598823 main.go:141] libmachine: Reticulating splines...
	I0630 15:36:53.014365 1598823 client.go:171] duration metric: took 23.629705193s to LocalClient.Create
	I0630 15:36:53.014397 1598823 start.go:167] duration metric: took 23.629779188s to libmachine.API.Create "kubernetes-upgrade-691468"
	I0630 15:36:53.014415 1598823 start.go:293] postStartSetup for "kubernetes-upgrade-691468" (driver="kvm2")
	I0630 15:36:53.014429 1598823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:36:53.014462 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:53.014718 1598823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:36:53.014748 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:53.017798 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.018277 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:53.018299 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.018525 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:53.018765 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:53.018968 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:53.019169 1598823 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:36:53.106458 1598823 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:36:53.111568 1598823 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:36:53.111601 1598823 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:36:53.111667 1598823 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:36:53.111737 1598823 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:36:53.111846 1598823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:36:53.125344 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:36:53.161781 1598823 start.go:296] duration metric: took 147.328311ms for postStartSetup
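The filesync scan above maps each file under the local .minikube/files tree onto the identical path on the node, which is how files/etc/ssl/certs/15577322.pem becomes /etc/ssl/certs/15577322.pem. A sketch of that mapping (the paths in main are illustrative, shortened from the Jenkins workspace paths in the log):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// targetPath maps a file under the local .minikube/files tree to its
	// destination on the node by re-rooting the relative path at /.
	func targetPath(filesRoot, local string) (string, error) {
		rel, err := filepath.Rel(filesRoot, local)
		if err != nil {
			return "", err
		}
		return "/" + strings.TrimPrefix(filepath.ToSlash(rel), "./"), nil
	}

	func main() {
		p, _ := targetPath("/home/jenkins/.minikube/files",
			"/home/jenkins/.minikube/files/etc/ssl/certs/15577322.pem")
		fmt.Println(p) // /etc/ssl/certs/15577322.pem
	}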
	I0630 15:36:53.161853 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetConfigRaw
	I0630 15:36:53.162534 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetIP
	I0630 15:36:53.165382 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.165737 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:53.165768 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.166013 1598823 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/config.json ...
	I0630 15:36:53.166233 1598823 start.go:128] duration metric: took 23.805031578s to createHost
	I0630 15:36:53.166259 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:53.168825 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.169321 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:53.169388 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.169429 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:53.169623 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:53.169963 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:53.170285 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:53.170590 1598823 main.go:141] libmachine: Using SSH client type: native
	I0630 15:36:53.170817 1598823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:36:53.170829 1598823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:36:53.283697 1598823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751297813.256559313
	
	I0630 15:36:53.283726 1598823 fix.go:216] guest clock: 1751297813.256559313
	I0630 15:36:53.283736 1598823 fix.go:229] Guest: 2025-06-30 15:36:53.256559313 +0000 UTC Remote: 2025-06-30 15:36:53.166247 +0000 UTC m=+23.948039385 (delta=90.312313ms)
	I0630 15:36:53.283779 1598823 fix.go:200] guest clock delta is within tolerance: 90.312313ms
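The guest-clock check above parses the VM's `date +%s.%N` output, diffs it against host time, and only intervenes when the delta exceeds a tolerance (about 90ms here, deemed fine). A sketch of that comparison; the 2-second threshold is an assumption for illustration, since the log only shows the delta being accepted:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is from the given host reference time. float64 parsing
	// is only microsecond-accurate at this magnitude, plenty for a 90ms delta.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(1751297813, 166247000) // the "Remote" timestamp from the log
		d, _ := clockDelta("1751297813.256559313", host)
		const tolerance = 2 * time.Second // assumed threshold for illustration
		fmt.Printf("delta=%v within tolerance: %v\n", d, d > -tolerance && d < tolerance)
	}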
	I0630 15:36:53.283790 1598823 start.go:83] releasing machines lock for "kubernetes-upgrade-691468", held for 23.922695725s
	I0630 15:36:53.283822 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:53.284158 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetIP
	I0630 15:36:53.287552 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.287976 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:53.288021 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.288213 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:53.288860 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:53.289063 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:36:53.289163 1598823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:36:53.289208 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:53.289321 1598823 ssh_runner.go:195] Run: cat /version.json
	I0630 15:36:53.289348 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:36:53.292064 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.292813 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.293561 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:53.293597 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.293870 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:53.294097 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:53.294163 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:53.294207 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:53.294296 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:53.294392 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:36:53.294489 1598823 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:36:53.294579 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:36:53.294727 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:36:53.294844 1598823 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:36:53.383068 1598823 ssh_runner.go:195] Run: systemctl --version
	I0630 15:36:53.424960 1598823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:36:53.587254 1598823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:36:53.594619 1598823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:36:53.594719 1598823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:36:53.614450 1598823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:36:53.614483 1598823 start.go:495] detecting cgroup driver to use...
	I0630 15:36:53.614558 1598823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:36:53.633184 1598823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:36:53.650685 1598823 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:36:53.650745 1598823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:36:53.667896 1598823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:36:53.685157 1598823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:36:53.862791 1598823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:36:54.036696 1598823 docker.go:246] disabling docker service ...
	I0630 15:36:54.036774 1598823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:36:54.057653 1598823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:36:54.074222 1598823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:36:54.284626 1598823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:36:54.439900 1598823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
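The systemctl sequence above (stop the socket and service, disable, mask) guarantees docker and cri-docker cannot reclaim the CRI socket once CRI-O takes over. The same sequence as a small Go helper; local exec is used for brevity where the log runs these over the ssh_runner, and the exact unit pairing is simplified:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// maskUnit runs the stop/disable/mask sequence the log applies to the
	// docker and cri-docker units before switching to CRI-O.
	func maskUnit(unit string) {
		for _, args := range [][]string{
			{"systemctl", "stop", "-f", unit},
			{"systemctl", "disable", unit},
			{"systemctl", "mask", unit},
		} {
			if err := exec.Command("sudo", args...).Run(); err != nil {
				fmt.Printf("%v: %v (often fine if the unit is absent)\n", args, err)
			}
		}
	}

	func main() {
		maskUnit("docker.service")
	}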
	I0630 15:36:54.456481 1598823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:36:54.477643 1598823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0630 15:36:54.477727 1598823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:54.489451 1598823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:36:54.489537 1598823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:54.502182 1598823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:54.513820 1598823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:36:54.525591 1598823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:36:54.538452 1598823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:36:54.549483 1598823 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:36:54.549564 1598823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:36:54.566240 1598823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 15:36:54.577671 1598823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:36:54.726005 1598823 ssh_runner.go:195] Run: sudo systemctl restart crio
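Each CRI-O setting above (pause_image, cgroup_manager, conmon_cgroup) is rewritten with an anchored sed over /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. A sketch of the equivalent substitution in Go, operating on file contents rather than over SSH (the helper name is made up):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCrioOption replaces an existing `key = ...` line in a crio.conf
	// fragment, mirroring the sed one-liners run above.
	func setCrioOption(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, key+` = "`+value+`"`)
	}

	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
		conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2")
		conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}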
	I0630 15:36:54.831803 1598823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:36:54.831892 1598823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:36:54.836991 1598823 start.go:563] Will wait 60s for crictl version
	I0630 15:36:54.837080 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:54.841382 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:36:54.883040 1598823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:36:54.883127 1598823 ssh_runner.go:195] Run: crio --version
	I0630 15:36:54.911657 1598823 ssh_runner.go:195] Run: crio --version
	I0630 15:36:54.942868 1598823 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0630 15:36:54.944418 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetIP
	I0630 15:36:54.949813 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:54.950581 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:36:44 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:36:54.950626 1598823 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:36:54.950874 1598823 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0630 15:36:54.955528 1598823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:36:54.970278 1598823 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:36:54.970399 1598823 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 15:36:54.970444 1598823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:36:55.014133 1598823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0630 15:36:55.014241 1598823 ssh_runner.go:195] Run: which lz4
	I0630 15:36:55.019987 1598823 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:36:55.026134 1598823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:36:55.026176 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0630 15:36:56.831426 1598823 crio.go:462] duration metric: took 1.81146809s to copy over tarball
	I0630 15:36:56.831531 1598823 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:36:59.381707 1598823 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.550136921s)
	I0630 15:36:59.381756 1598823 crio.go:469] duration metric: took 2.550286709s to extract the tarball
	I0630 15:36:59.381793 1598823 ssh_runner.go:146] rm: /preloaded.tar.lz4
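The preload flow above is: stat the target, copy the tarball only when the stat fails, extract with lz4 under /var, then delete the tarball. A compact sketch of that decision, exec'ing locally for illustration where the real code runs everything over the ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensurePreload extracts a preloaded image tarball if present, following
	// the stat / scp / tar sequence from the log.
	func ensurePreload(tarball string) error {
		if err := exec.Command("stat", tarball).Run(); err != nil {
			// In the real flow the tarball is scp'd from the host cache here.
			return fmt.Errorf("%s missing, would copy from cache: %w", tarball, err)
		}
		// --xattrs preserves the file capabilities some k8s binaries rely on.
		return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
			"security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball).Run()
	}

	func main() {
		fmt.Println(ensurePreload("/preloaded.tar.lz4"))
	}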
	I0630 15:36:59.425663 1598823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:36:59.467270 1598823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0630 15:36:59.467300 1598823 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0630 15:36:59.467382 1598823 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:36:59.467494 1598823 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:36:59.467537 1598823 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0630 15:36:59.467425 1598823 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:36:59.467434 1598823 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:36:59.467595 1598823 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:36:59.467444 1598823 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:36:59.467456 1598823 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0630 15:36:59.469123 1598823 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0630 15:36:59.469186 1598823 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:36:59.469212 1598823 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:36:59.469266 1598823 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:36:59.469287 1598823 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0630 15:36:59.469137 1598823 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:36:59.469131 1598823 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:36:59.469143 1598823 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:36:59.687141 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0630 15:36:59.689936 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0630 15:36:59.721343 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:36:59.735402 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:36:59.736160 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:36:59.736693 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0630 15:36:59.741628 1598823 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0630 15:36:59.741680 1598823 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0630 15:36:59.741778 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:59.751486 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:36:59.791628 1598823 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0630 15:36:59.791697 1598823 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:36:59.791757 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:59.857376 1598823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0630 15:36:59.857460 1598823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:36:59.857380 1598823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0630 15:36:59.857519 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:59.857550 1598823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:36:59.857601 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:59.857394 1598823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0630 15:36:59.857712 1598823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:36:59.857778 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:59.864187 1598823 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0630 15:36:59.864226 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:36:59.864232 1598823 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0630 15:36:59.864259 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:59.880051 1598823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0630 15:36:59.880110 1598823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:36:59.880133 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:36:59.880150 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:36:59.880171 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:36:59.880177 1598823 ssh_runner.go:195] Run: which crictl
	I0630 15:36:59.880239 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:36:59.956468 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:36:59.956557 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:36:59.956584 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:36:59.998962 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:37:00.013666 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:37:00.013666 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:37:00.013831 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:37:00.109240 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:37:00.109386 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:37:00.117432 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:37:00.168437 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:37:00.209321 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:37:00.209428 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:37:00.209455 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:37:00.251097 1598823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0630 15:37:00.251196 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:37:00.263921 1598823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0630 15:37:00.317622 1598823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0630 15:37:00.322020 1598823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0630 15:37:00.347420 1598823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0630 15:37:00.356210 1598823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:37:00.356246 1598823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0630 15:37:00.391313 1598823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0630 15:37:00.829902 1598823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:37:00.978141 1598823 cache_images.go:92] duration metric: took 1.510818429s to LoadCachedImages
	W0630 15:37:00.978267 1598823 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
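The "needs transfer" lines above come from comparing each required image's on-node ID (via podman image inspect) against the expected hash; a mismatch or a missing image gets removed with crictl and queued for loading from the local cache, and here the etcd cache file itself turns out to be absent, producing the warning. A sketch of the per-image check (the helper name is illustrative; the hash in main is the pause:3.2 ID from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageNeedsTransfer reports whether an image must be loaded from the
	// local cache: true when `podman image inspect` fails or returns a
	// different ID, matching the "needs transfer" lines in the log.
	func imageNeedsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present on the node at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		fmt.Println(imageNeedsTransfer("registry.k8s.io/pause:3.2",
			"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
	}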
	I0630 15:37:00.978287 1598823 kubeadm.go:926] updating node { 192.168.50.75 8443 v1.20.0 crio true true} ...
	I0630 15:37:00.978445 1598823 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-691468 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 15:37:00.978547 1598823 ssh_runner.go:195] Run: crio config
	I0630 15:37:01.026940 1598823 cni.go:84] Creating CNI manager for ""
	I0630 15:37:01.026972 1598823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:37:01.026987 1598823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:37:01.027007 1598823 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.75 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-691468 NodeName:kubernetes-upgrade-691468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0630 15:37:01.027178 1598823 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.75
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-691468"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.75
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.75"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 15:37:01.027275 1598823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0630 15:37:01.038890 1598823 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:37:01.038996 1598823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:37:01.050651 1598823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0630 15:37:01.073936 1598823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:37:01.095542 1598823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0630 15:37:01.116383 1598823 ssh_runner.go:195] Run: grep 192.168.50.75	control-plane.minikube.internal$ /etc/hosts
	I0630 15:37:01.120417 1598823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:37:01.135080 1598823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:37:01.277359 1598823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:37:01.308395 1598823 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468 for IP: 192.168.50.75
	I0630 15:37:01.308425 1598823 certs.go:194] generating shared ca certs ...
	I0630 15:37:01.308448 1598823 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:37:01.308676 1598823 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:37:01.308728 1598823 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:37:01.308740 1598823 certs.go:256] generating profile certs ...
	I0630 15:37:01.308816 1598823 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.key
	I0630 15:37:01.308836 1598823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.crt with IP's: []
	I0630 15:37:01.629089 1598823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.crt ...
	I0630 15:37:01.629128 1598823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.crt: {Name:mk638cc691814e8dca2fda130b3a8407bffa032a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:37:01.629300 1598823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.key ...
	I0630 15:37:01.629317 1598823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.key: {Name:mk06a030b894b75f47a598a77fe43550a8e7cca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:37:01.629395 1598823 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.key.1f1a34b6
	I0630 15:37:01.629431 1598823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.crt.1f1a34b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.75]
	I0630 15:37:02.000669 1598823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.crt.1f1a34b6 ...
	I0630 15:37:02.000706 1598823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.crt.1f1a34b6: {Name:mk4a1ffea28cca70c54c91efb53d5b9fff446153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:37:02.000882 1598823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.key.1f1a34b6 ...
	I0630 15:37:02.000895 1598823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.key.1f1a34b6: {Name:mke41687e96aee60a3627a34bb3b453c3c504815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:37:02.000986 1598823 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.crt.1f1a34b6 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.crt
	I0630 15:37:02.001085 1598823 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.key.1f1a34b6 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.key
	I0630 15:37:02.001157 1598823 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.key
	I0630 15:37:02.001179 1598823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.crt with IP's: []
	I0630 15:37:02.626836 1598823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.crt ...
	I0630 15:37:02.626878 1598823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.crt: {Name:mk17f47a485e62052b8e4e2c37bf0e99729fbdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:37:02.627044 1598823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.key ...
	I0630 15:37:02.627187 1598823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.key: {Name:mkdb87c6d89c20de500ac5a79e45ad2a89e34fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:37:02.627527 1598823 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:37:02.627585 1598823 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:37:02.627599 1598823 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:37:02.627619 1598823 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:37:02.627640 1598823 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:37:02.627666 1598823 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:37:02.627710 1598823 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
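	The profile certs generated above embed the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.75]. A minimal sketch for double-checking those SANs by hand, assuming access to the profile directory named in the log (standard openssl usage, not part of the test run itself):

		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.crt \
		  | grep -A1 'Subject Alternative Name'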
	I0630 15:37:02.628365 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:37:02.659263 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:37:02.690204 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:37:02.719415 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:37:02.752511 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0630 15:37:02.782594 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 15:37:02.816674 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:37:02.861780 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 15:37:02.898243 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:37:02.941191 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:37:02.972599 1598823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:37:03.001958 1598823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:37:03.022388 1598823 ssh_runner.go:195] Run: openssl version
	I0630 15:37:03.029274 1598823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:37:03.042286 1598823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:37:03.047492 1598823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:37:03.047574 1598823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:37:03.055484 1598823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:37:03.070147 1598823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:37:03.084104 1598823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:37:03.089170 1598823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:37:03.089251 1598823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:37:03.097204 1598823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:37:03.110374 1598823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:37:03.125829 1598823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:37:03.131606 1598823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:37:03.131686 1598823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:37:03.139188 1598823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
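	The hash-and-symlink steps above follow the standard OpenSSL trust-store convention: each trusted PEM is linked as <subject-hash>.0 under /etc/ssl/certs so the library can find it by hash. A minimal sketch of that convention for one cert, generalizing the commands the runner issues:

		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA, matching the symlink above
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"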
	I0630 15:37:03.153217 1598823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:37:03.158280 1598823 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:37:03.158359 1598823 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:37:03.158446 1598823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:37:03.158501 1598823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:37:03.205687 1598823 cri.go:89] found id: ""
	I0630 15:37:03.205764 1598823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:37:03.218859 1598823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:37:03.231041 1598823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:37:03.244596 1598823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:37:03.244628 1598823 kubeadm.go:157] found existing configuration files:
	
	I0630 15:37:03.244705 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:37:03.257493 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:37:03.257566 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:37:03.268836 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:37:03.279408 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:37:03.279481 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:37:03.290509 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:37:03.302106 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:37:03.302193 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:37:03.313798 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:37:03.324855 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:37:03.324929 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
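	The four grep-then-rm exchanges above implement a single cleanup rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed. A sketch of the same check as one loop, assuming bash on the node:

		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
		    || sudo rm -f /etc/kubernetes/$f.conf
		done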
	I0630 15:37:03.336785 1598823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:37:03.561162 1598823 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:39:01.873888 1598823 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:39:01.874128 1598823 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:39:01.876049 1598823 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:39:01.876150 1598823 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:39:01.876308 1598823 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:39:01.876678 1598823 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:39:01.876967 1598823 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:39:01.877603 1598823 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:39:01.879496 1598823 out.go:235]   - Generating certificates and keys ...
	I0630 15:39:01.879613 1598823 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:39:01.879873 1598823 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:39:01.880004 1598823 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:39:01.880095 1598823 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:39:01.880145 1598823 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:39:01.880187 1598823 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:39:01.880253 1598823 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:39:01.880394 1598823 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-691468 localhost] and IPs [192.168.50.75 127.0.0.1 ::1]
	I0630 15:39:01.880470 1598823 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:39:01.880599 1598823 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-691468 localhost] and IPs [192.168.50.75 127.0.0.1 ::1]
	I0630 15:39:01.880685 1598823 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:39:01.880774 1598823 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:39:01.880838 1598823 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:39:01.880927 1598823 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:39:01.881010 1598823 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:39:01.881099 1598823 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:39:01.881192 1598823 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:39:01.881271 1598823 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:39:01.881477 1598823 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:39:01.881597 1598823 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:39:01.881660 1598823 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:39:01.881760 1598823 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:39:01.882960 1598823 out.go:235]   - Booting up control plane ...
	I0630 15:39:01.883089 1598823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:39:01.883195 1598823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:39:01.883267 1598823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:39:01.883338 1598823 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:39:01.883480 1598823 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:39:01.883554 1598823 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:39:01.883626 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:39:01.883804 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:39:01.883900 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:39:01.884080 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:39:01.884183 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:39:01.884386 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:39:01.884466 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:39:01.884633 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:39:01.884699 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:39:01.884865 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:39:01.884873 1598823 kubeadm.go:310] 
	I0630 15:39:01.884909 1598823 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:39:01.884942 1598823 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:39:01.884948 1598823 kubeadm.go:310] 
	I0630 15:39:01.884977 1598823 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:39:01.885008 1598823 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:39:01.885099 1598823 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:39:01.885111 1598823 kubeadm.go:310] 
	I0630 15:39:01.885200 1598823 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:39:01.885233 1598823 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:39:01.885261 1598823 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:39:01.885267 1598823 kubeadm.go:310] 
	I0630 15:39:01.885415 1598823 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:39:01.885550 1598823 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:39:01.885569 1598823 kubeadm.go:310] 
	I0630 15:39:01.885684 1598823 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:39:01.885780 1598823 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:39:01.885854 1598823 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:39:01.885915 1598823 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:39:01.885938 1598823 kubeadm.go:310] 
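	kubeadm's advice above boils down to two crictl calls: list the kube containers, then pull logs for the failing one. A sketch that loops the two together, assuming the CRI-O socket path from the log and that crictl's --tail flag is available:

		SOCK=/var/run/crio/crio.sock
		for id in $(sudo crictl --runtime-endpoint "$SOCK" ps -a --quiet); do
		  echo "== $id =="
		  sudo crictl --runtime-endpoint "$SOCK" logs --tail 20 "$id"
		done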
	W0630 15:39:01.886091 1598823 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-691468 localhost] and IPs [192.168.50.75 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-691468 localhost] and IPs [192.168.50.75 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0630 15:39:01.886134 1598823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:39:04.286416 1598823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.40025762s)
	I0630 15:39:04.286496 1598823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:39:04.306313 1598823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:39:04.321286 1598823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:39:04.321313 1598823 kubeadm.go:157] found existing configuration files:
	
	I0630 15:39:04.321377 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:39:04.334189 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:39:04.334282 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:39:04.349736 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:39:04.360956 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:39:04.361035 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:39:04.375631 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:39:04.388637 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:39:04.388721 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:39:04.403108 1598823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:39:04.417977 1598823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:39:04.418057 1598823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:39:04.433988 1598823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:39:04.511979 1598823 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:39:04.512075 1598823 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:39:04.690702 1598823 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:39:04.690884 1598823 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:39:04.691033 1598823 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:39:04.921612 1598823 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:39:04.924537 1598823 out.go:235]   - Generating certificates and keys ...
	I0630 15:39:04.924666 1598823 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:39:04.924745 1598823 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:39:04.924848 1598823 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:39:04.924945 1598823 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:39:04.925004 1598823 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:39:04.926425 1598823 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:39:04.926528 1598823 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:39:04.926617 1598823 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:39:04.927876 1598823 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:39:04.928137 1598823 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:39:04.928650 1598823 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:39:04.928714 1598823 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:39:04.979978 1598823 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:39:05.199758 1598823 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:39:05.345920 1598823 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:39:05.587151 1598823 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:39:05.605639 1598823 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:39:05.606735 1598823 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:39:05.606815 1598823 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:39:05.804810 1598823 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:39:05.806712 1598823 out.go:235]   - Booting up control plane ...
	I0630 15:39:05.806854 1598823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:39:05.822725 1598823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:39:05.823959 1598823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:39:05.825334 1598823 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:39:05.827414 1598823 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:39:45.829712 1598823 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:39:45.830205 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:39:45.830746 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:39:50.831084 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:39:50.831401 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:40:00.831916 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:40:00.832176 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:40:20.831751 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:40:20.831989 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:41:00.832667 1598823 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:41:00.833004 1598823 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:41:00.833029 1598823 kubeadm.go:310] 
	I0630 15:41:00.833125 1598823 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:41:00.833201 1598823 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:41:00.833226 1598823 kubeadm.go:310] 
	I0630 15:41:00.833287 1598823 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:41:00.833339 1598823 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:41:00.833514 1598823 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:41:00.833524 1598823 kubeadm.go:310] 
	I0630 15:41:00.833643 1598823 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:41:00.833669 1598823 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:41:00.833694 1598823 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:41:00.833698 1598823 kubeadm.go:310] 
	I0630 15:41:00.833781 1598823 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:41:00.833847 1598823 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:41:00.833851 1598823 kubeadm.go:310] 
	I0630 15:41:00.833981 1598823 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:41:00.834103 1598823 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:41:00.834294 1598823 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:41:00.834476 1598823 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:41:00.834505 1598823 kubeadm.go:310] 
	I0630 15:41:00.835834 1598823 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:41:00.835952 1598823 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:41:00.836041 1598823 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:41:00.836126 1598823 kubeadm.go:394] duration metric: took 3m57.677772677s to StartCluster
	I0630 15:41:00.836186 1598823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:41:00.836264 1598823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:41:00.890123 1598823 cri.go:89] found id: ""
	I0630 15:41:00.890161 1598823 logs.go:282] 0 containers: []
	W0630 15:41:00.890175 1598823 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:41:00.890186 1598823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:41:00.890266 1598823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:41:00.947769 1598823 cri.go:89] found id: ""
	I0630 15:41:00.947806 1598823 logs.go:282] 0 containers: []
	W0630 15:41:00.947819 1598823 logs.go:284] No container was found matching "etcd"
	I0630 15:41:00.947828 1598823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:41:00.947904 1598823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:41:01.008995 1598823 cri.go:89] found id: ""
	I0630 15:41:01.009035 1598823 logs.go:282] 0 containers: []
	W0630 15:41:01.009055 1598823 logs.go:284] No container was found matching "coredns"
	I0630 15:41:01.009064 1598823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:41:01.009142 1598823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:41:01.058434 1598823 cri.go:89] found id: ""
	I0630 15:41:01.058470 1598823 logs.go:282] 0 containers: []
	W0630 15:41:01.058478 1598823 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:41:01.058485 1598823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:41:01.058557 1598823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:41:01.108609 1598823 cri.go:89] found id: ""
	I0630 15:41:01.108655 1598823 logs.go:282] 0 containers: []
	W0630 15:41:01.108669 1598823 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:41:01.108679 1598823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:41:01.108756 1598823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:41:01.157529 1598823 cri.go:89] found id: ""
	I0630 15:41:01.157566 1598823 logs.go:282] 0 containers: []
	W0630 15:41:01.157574 1598823 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:41:01.157582 1598823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:41:01.157659 1598823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:41:01.215723 1598823 cri.go:89] found id: ""
	I0630 15:41:01.215762 1598823 logs.go:282] 0 containers: []
	W0630 15:41:01.215776 1598823 logs.go:284] No container was found matching "kindnet"
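	Every per-component probe above returned an empty ID list, so the runtime never created any control-plane containers at all: the failure happened before pod sandboxes were started. A one-shot version of the same check, reusing the kube-system label filter that appears earlier in this log:

		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l   # prints 0 here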
	I0630 15:41:01.215793 1598823 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:41:01.215811 1598823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:41:01.304580 1598823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:41:01.304603 1598823 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:41:01.304620 1598823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:41:01.426238 1598823 logs.go:123] Gathering logs for container status ...
	I0630 15:41:01.426348 1598823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:41:01.488943 1598823 logs.go:123] Gathering logs for kubelet ...
	I0630 15:41:01.488979 1598823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:41:01.552429 1598823 logs.go:123] Gathering logs for dmesg ...
	I0630 15:41:01.552475 1598823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
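	The gathering commands above are the raw versions of what lands in this report. A sketch for reproducing the same bundle by hand on the node (commands copied from the log; running minikube logs -p kubernetes-upgrade-691468 from the host would be the assumed higher-level equivalent):

		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400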
	W0630 15:41:01.572623 1598823 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0630 15:41:01.572691 1598823 out.go:270] * 
	W0630 15:41:01.572772 1598823 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:41:01.572801 1598823 out.go:270] * 
	W0630 15:41:01.574538 1598823 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0630 15:41:01.577734 1598823 out.go:201] 
	W0630 15:41:01.579176 1598823 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:41:01.579231 1598823 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0630 15:41:01.579284 1598823 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0630 15:41:01.580890 1598823 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
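For reference, the triage steps that the kubeadm error text above recommends can be run against the node directly. A minimal sketch, assuming the quoted commands are executed over minikube ssh for this profile; the --extra-config retry flag is the one minikube itself suggests in the stderr above, offered as a possible fix rather than a verified one:

	# Inspect kubelet health and the control-plane containers on the node,
	# using the commands quoted in the kubeadm error output.
	minikube ssh -p kubernetes-upgrade-691468 "sudo systemctl status kubelet"
	minikube ssh -p kubernetes-upgrade-691468 "sudo journalctl -xeu kubelet"
	minikube ssh -p kubernetes-upgrade-691468 \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry with the cgroup-driver override minikube suggests:
	minikube start -p kubernetes-upgrade-691468 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd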
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-691468
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-691468: (1.681037246s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-691468 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-691468 status --format={{.Host}}: exit status 7 (88.740461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
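The test treats exit status 7 from "status" as acceptable here because the cluster was deliberately stopped. A minimal shell sketch of the same handling, using only the command shown above; the exit-code meaning is taken from the test's own "may be ok" note, not asserted independently:

	# Tolerate a non-zero exit code from "status" right after an intentional stop.
	if ! out/minikube-linux-amd64 -p kubernetes-upgrade-691468 status --format={{.Host}}; then
	  echo "status exited non-zero; acceptable because the host was just stopped"
	fi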
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0630 15:41:04.684613 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.191426615s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-691468 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (107.736514ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-691468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.33.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-691468
	    minikube start -p kubernetes-upgrade-691468 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6914682 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.33.2, by running:
	    
	    minikube start -p kubernetes-upgrade-691468 --kubernetes-version=v1.33.2
	    

                                                
                                                
** /stderr **
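The K8S_DOWNGRADE_UNSUPPORTED message above lists three recovery paths. A minimal sketch of option 1, recreating the cluster at the older version, using exactly the commands minikube printed; the test instead follows option 3 and restarts at v1.33.2 in the next step:

	# Option 1 from the suggestion above: recreate at Kubernetes v1.20.0.
	minikube delete -p kubernetes-upgrade-691468
	minikube start -p kubernetes-upgrade-691468 --kubernetes-version=v1.20.0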
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-691468 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.388933438s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-06-30 15:43:27.185774077 +0000 UTC m=+5150.273654895
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-691468 -n kubernetes-upgrade-691468
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-691468 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-691468 logs -n 25: (1.666653593s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-668101 sudo cat              | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat              | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                  | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                  | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                  | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo find             | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo crio             | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-668101                       | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:40 UTC |
	| start   | -p cert-expiration-775975              | cert-expiration-775975    | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:41 UTC |
	|         | --memory=3072                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-185417            | force-systemd-env-185417  | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:40 UTC |
	| start   | -p force-systemd-flag-632862           | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:42 UTC |
	|         | --memory=3072 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-691468           | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:41 UTC |
	| start   | -p kubernetes-upgrade-691468           | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:42 UTC |
	|         | --memory=3072                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-011818                        | pause-011818              | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:42 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-632862 ssh cat      | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:42 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-632862           | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:42 UTC |
	| start   | -p cert-options-329017                 | cert-options-329017       | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:43 UTC |
	|         | --memory=3072                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-691468           | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC |                     |
	|         | --memory=3072                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-691468           | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:43 UTC |
	|         | --memory=3072                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p pause-011818                        | pause-011818              | jenkins | v1.36.0 | 30 Jun 25 15:43 UTC | 30 Jun 25 15:43 UTC |
	| start   | -p old-k8s-version-836310              | old-k8s-version-836310    | jenkins | v1.36.0 | 30 Jun 25 15:43 UTC |                     |
	|         | --memory=3072                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| ssh     | cert-options-329017 ssh                | cert-options-329017       | jenkins | v1.36.0 | 30 Jun 25 15:43 UTC | 30 Jun 25 15:43 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-329017 -- sudo         | cert-options-329017       | jenkins | v1.36.0 | 30 Jun 25 15:43 UTC | 30 Jun 25 15:43 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-329017                 | cert-options-329017       | jenkins | v1.36.0 | 30 Jun 25 15:43 UTC | 30 Jun 25 15:43 UTC |
	| start   | -p no-preload-733305                   | no-preload-733305         | jenkins | v1.36.0 | 30 Jun 25 15:43 UTC |                     |
	|         | --memory=3072                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:43:14
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:43:14.827691 1607097 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:43:14.827973 1607097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:43:14.827983 1607097 out.go:358] Setting ErrFile to fd 2...
	I0630 15:43:14.827989 1607097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:43:14.828225 1607097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:43:14.828860 1607097 out.go:352] Setting JSON to false
	I0630 15:43:14.829984 1607097 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33887,"bootTime":1751264308,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:43:14.830099 1607097 start.go:140] virtualization: kvm guest
	I0630 15:43:14.832304 1607097 out.go:177] * [no-preload-733305] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:43:14.833623 1607097 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:43:14.833673 1607097 notify.go:220] Checking for updates...
	I0630 15:43:14.836385 1607097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:43:14.837787 1607097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:43:14.838878 1607097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:43:14.840108 1607097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:43:14.841485 1607097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:43:14.843705 1607097 config.go:182] Loaded profile config "cert-expiration-775975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:43:14.843855 1607097 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:43:14.844003 1607097 config.go:182] Loaded profile config "old-k8s-version-836310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:43:14.844136 1607097 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:43:14.885078 1607097 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:43:14.886341 1607097 start.go:304] selected driver: kvm2
	I0630 15:43:14.886375 1607097 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:43:14.886387 1607097 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:43:14.887182 1607097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.887258 1607097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:43:14.904175 1607097 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:43:14.904243 1607097 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 15:43:14.904572 1607097 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:43:14.904615 1607097 cni.go:84] Creating CNI manager for ""
	I0630 15:43:14.904675 1607097 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:43:14.904688 1607097 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 15:43:14.904759 1607097 start.go:347] cluster config:
	{Name:no-preload-733305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:no-preload-733305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:43:14.904885 1607097 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.906842 1607097 out.go:177] * Starting "no-preload-733305" primary control-plane node in "no-preload-733305" cluster
	I0630 15:43:11.374818 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:11.375406 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:11.375433 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:11.375381 1606791 retry.go:31] will retry after 1.062262318s: waiting for domain to come up
	I0630 15:43:12.439890 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:12.440837 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:12.440868 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:12.440809 1606791 retry.go:31] will retry after 934.442012ms: waiting for domain to come up
	I0630 15:43:13.377218 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:13.377833 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:13.377926 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:13.377841 1606791 retry.go:31] will retry after 1.383485531s: waiting for domain to come up
	I0630 15:43:14.762724 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:14.763428 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:14.763472 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:14.763400 1606791 retry.go:31] will retry after 2.063726231s: waiting for domain to come up
	I0630 15:43:14.908166 1607097 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:43:14.908371 1607097 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/config.json ...
	I0630 15:43:14.908415 1607097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/config.json: {Name:mkeaeb154c6070a2fdac12c470bc7a518d4b217e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:14.908453 1607097 cache.go:107] acquiring lock: {Name:mk3ed0da0b1edc75b2c4953a35b02df0f747abda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.908458 1607097 cache.go:107] acquiring lock: {Name:mk69d129546d3cb091e9db1c968f297beb8e63b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.908474 1607097 cache.go:107] acquiring lock: {Name:mk2aeaefa7449647f66a212a00ce3d1f9f37a628 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.908505 1607097 cache.go:107] acquiring lock: {Name:mk451714c711dd9a3cee3cfff2c8b27082fec960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.908571 1607097 cache.go:115] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0630 15:43:14.908595 1607097 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 149.488µs
	I0630 15:43:14.908618 1607097 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0630 15:43:14.908586 1607097 cache.go:107] acquiring lock: {Name:mkac84c460143d02e81f70f36bde4096dbfb6ae5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.908644 1607097 start.go:360] acquireMachinesLock for no-preload-733305: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:43:14.908642 1607097 cache.go:107] acquiring lock: {Name:mk28ad7418de4f7f89b7c032a9b57c7ed39a64e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.908696 1607097 cache.go:107] acquiring lock: {Name:mkcd10a8500b2cf52c44e7e11ef16ebaf98d0ea1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.908630 1607097 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.33.2
	I0630 15:43:14.908777 1607097 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.33.2
	I0630 15:43:14.908620 1607097 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.33.2
	I0630 15:43:14.908816 1607097 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.33.2
	I0630 15:43:14.908829 1607097 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.21-0
	I0630 15:43:14.908900 1607097 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.12.0
	I0630 15:43:14.909175 1607097 cache.go:107] acquiring lock: {Name:mk5eb08cc4a63e45095b194325d32ba812fb53de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:14.909358 1607097 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0630 15:43:14.910274 1607097 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.21-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.21-0
	I0630 15:43:14.910310 1607097 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.33.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.33.2
	I0630 15:43:14.910369 1607097 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.33.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.33.2
	I0630 15:43:14.910400 1607097 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.33.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.33.2
	I0630 15:43:14.910434 1607097 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.33.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.33.2
	I0630 15:43:14.910529 1607097 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.0: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.0
	I0630 15:43:14.910555 1607097 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0630 15:43:15.110833 1607097 cache.go:162] opening:  /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0630 15:43:15.132470 1607097 cache.go:162] opening:  /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0
	I0630 15:43:15.134154 1607097 cache.go:162] opening:  /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.33.2
	I0630 15:43:15.177633 1607097 cache.go:162] opening:  /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.33.2
	I0630 15:43:15.187142 1607097 cache.go:162] opening:  /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.33.2
	I0630 15:43:15.200699 1607097 cache.go:162] opening:  /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.33.2
	I0630 15:43:15.201442 1607097 cache.go:162] opening:  /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.0
	I0630 15:43:15.221493 1607097 cache.go:157] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0630 15:43:15.221527 1607097 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 312.401147ms
	I0630 15:43:15.221544 1607097 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0630 15:43:15.763032 1607097 cache.go:157] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.33.2 exists
	I0630 15:43:15.763068 1607097 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.33.2" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.33.2" took 854.616966ms
	I0630 15:43:15.763080 1607097 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.33.2 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.33.2 succeeded
	I0630 15:43:17.098298 1607097 cache.go:157] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.33.2 exists
	I0630 15:43:17.098333 1607097 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.33.2" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.33.2" took 2.189829211s
	I0630 15:43:17.098351 1607097 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.33.2 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.33.2 succeeded
	I0630 15:43:17.110915 1607097 cache.go:157] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.33.2 exists
	I0630 15:43:17.110960 1607097 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.33.2" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.33.2" took 2.202319025s
	I0630 15:43:17.110980 1607097 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.33.2 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.33.2 succeeded
	I0630 15:43:17.145764 1607097 cache.go:157] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.0 exists
	I0630 15:43:17.145803 1607097 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.0" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.0" took 2.237170052s
	I0630 15:43:17.145821 1607097 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.0 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.0 succeeded
	I0630 15:43:17.167278 1607097 cache.go:157] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.33.2 exists
	I0630 15:43:17.167315 1607097 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.33.2" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.33.2" took 2.25886777s
	I0630 15:43:17.167331 1607097 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.33.2 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.33.2 succeeded
	I0630 15:43:17.447197 1607097 cache.go:157] /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 exists
	I0630 15:43:17.447239 1607097 cache.go:96] cache image "registry.k8s.io/etcd:3.5.21-0" -> "/home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0" took 2.538725834s
	I0630 15:43:17.447277 1607097 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.21-0 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.21-0 succeeded
	I0630 15:43:17.447313 1607097 cache.go:87] Successfully saved all images to host disk.
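The cache.go lines above trace the same check-then-save pattern once per image: stat the tar under the cache directory, skip if it already exists, otherwise pull, export, and log the elapsed time. A minimal Go sketch of that flow (saveToTar is a hypothetical stand-in for the real pull-and-export step, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    // saveToTar stands in for the real pull-and-export step (stubbed here).
    func saveToTar(img, dst string) error { return nil }

    func cacheImage(img, cacheDir string) error {
        // "registry.k8s.io/pause:3.10" -> <cacheDir>/registry.k8s.io/pause_3.10
        dst := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
        if _, err := os.Stat(dst); err == nil {
            return nil // the "exists" case at cache.go:157: nothing to do
        }
        start := time.Now()
        if err := saveToTar(img, dst); err != nil {
            return err
        }
        fmt.Printf("cache image %q -> %q took %s\n", img, dst, time.Since(start))
        return nil
    }

    func main() {
        _ = cacheImage("registry.k8s.io/pause:3.10", os.TempDir())
    }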
	I0630 15:43:18.768723 1606244 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0e75711e052f5ff08b4874c1e1a030454394da0de01bd83e996bc7dfeb69fbe8 54f5bf55946371a7a40cb81e86a9c4594c8032cef2803ec2a11c21532b30020e 30d540adc0a81bb2bb85b1341e8019c85b1baa48fd7f7b05d1e70ae62f981045 e3a1648e9f667ab920e9b280ca62b03c31f604b4589b066cceec6ad98f8dca8a 18777898da228b40c88db48f0049ab54171035f532e73cec3e6a8b8bf1b7536c 8e2de606b3105608917c510bca7b948eaab4103dfdd766ee5c75522f2645b0a5 c76b6c9b4fc63777b1ae885d697d63d167f1a530ac20bf75e7e14dc1246c226e 3df91a997cdebad0fde37c949dd74769301e3b3f0249413cb1bbcb8aa9887166 815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234 551d5e606d038a7d34f15d615df7758b9763f4808693198dd8ab56377499478c: (15.065749307s)
	W0630 15:43:18.768815 1606244 kubeadm.go:640] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0e75711e052f5ff08b4874c1e1a030454394da0de01bd83e996bc7dfeb69fbe8 54f5bf55946371a7a40cb81e86a9c4594c8032cef2803ec2a11c21532b30020e 30d540adc0a81bb2bb85b1341e8019c85b1baa48fd7f7b05d1e70ae62f981045 e3a1648e9f667ab920e9b280ca62b03c31f604b4589b066cceec6ad98f8dca8a 18777898da228b40c88db48f0049ab54171035f532e73cec3e6a8b8bf1b7536c 8e2de606b3105608917c510bca7b948eaab4103dfdd766ee5c75522f2645b0a5 c76b6c9b4fc63777b1ae885d697d63d167f1a530ac20bf75e7e14dc1246c226e 3df91a997cdebad0fde37c949dd74769301e3b3f0249413cb1bbcb8aa9887166 815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234 551d5e606d038a7d34f15d615df7758b9763f4808693198dd8ab56377499478c: Process exited with status 1
	stdout:
	0e75711e052f5ff08b4874c1e1a030454394da0de01bd83e996bc7dfeb69fbe8
	54f5bf55946371a7a40cb81e86a9c4594c8032cef2803ec2a11c21532b30020e
	30d540adc0a81bb2bb85b1341e8019c85b1baa48fd7f7b05d1e70ae62f981045
	e3a1648e9f667ab920e9b280ca62b03c31f604b4589b066cceec6ad98f8dca8a
	18777898da228b40c88db48f0049ab54171035f532e73cec3e6a8b8bf1b7536c
	8e2de606b3105608917c510bca7b948eaab4103dfdd766ee5c75522f2645b0a5
	c76b6c9b4fc63777b1ae885d697d63d167f1a530ac20bf75e7e14dc1246c226e
	3df91a997cdebad0fde37c949dd74769301e3b3f0249413cb1bbcb8aa9887166
	
	stderr:
	E0630 15:43:18.764218    4667 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234\": container with ID starting with 815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234 not found: ID does not exist" containerID="815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234"
	time="2025-06-30T15:43:18Z" level=fatal msg="stopping the container \"815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234\": rpc error: code = NotFound desc = could not find container \"815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234\": container with ID starting with 815672f387c4b95e4da16be5abe4cdeaf1497cf56705c52931497b768032f234 not found: ID does not exist"
	I0630 15:43:18.768944 1606244 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0630 15:43:18.826126 1606244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:43:18.839109 1606244 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jun 30 15:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun 30 15:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5741 Jun 30 15:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 30 15:42 /etc/kubernetes/scheduler.conf
	
	I0630 15:43:18.839224 1606244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:43:18.850868 1606244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:43:18.863200 1606244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:43:18.863333 1606244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:43:18.877211 1606244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:43:18.890633 1606244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:43:18.890721 1606244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:43:18.903792 1606244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:43:18.915825 1606244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:43:18.915891 1606244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
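The grep/rm pairs above are an idempotency guard: any kubeconfig under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is removed so the following `kubeadm init phase kubeconfig` regenerates it against the right endpoint. The same check, sketched in Go:

    package main

    import (
        "bytes"
        "os"
    )

    // scrubStale removes kubeconfigs that do not mention the expected
    // endpoint, mirroring the grep-then-rm pairs in the log above.
    func scrubStale(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // absent files get regenerated anyway
            }
            if !bytes.Contains(data, []byte(endpoint)) {
                os.Remove(f)
            }
        }
    }

    func main() {
        scrubStale("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }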
	I0630 15:43:18.927603 1606244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:43:18.939039 1606244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:43:18.994757 1606244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:43:20.162039 1606244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.167232s)
	I0630 15:43:20.162078 1606244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:43:20.470417 1606244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:43:20.549984 1606244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:43:20.630522 1606244 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:43:20.630631 1606244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:43:16.829672 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:16.830349 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:16.830372 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:16.830258 1606791 retry.go:31] will retry after 1.939367083s: waiting for domain to come up
	I0630 15:43:18.772504 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:18.773128 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:18.773158 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:18.773077 1606791 retry.go:31] will retry after 2.327621357s: waiting for domain to come up
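The interleaved old-k8s-version-836310 lines show libmachine polling libvirt for the domain's DHCP lease, with retry.go growing the delay on each miss (1.9s, then 2.3s, and so on). A generic sketch of that retry shape (the lookup function is a placeholder, not the actual libvirt call):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries lookup with a jittered, growing delay, the way
    // retry.go backs off while waiting for the domain to come up.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            delay = delay * 3 / 2
        }
        return "", errors.New("domain never reported an IP")
    }

    func main() {
        ip, err := waitForIP(func() (string, error) {
            return "192.168.61.10", nil // placeholder lookup
        }, 10)
        fmt.Println(ip, err)
    }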
	I0630 15:43:21.131161 1606244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:43:21.630811 1606244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:43:21.654353 1606244 api_server.go:72] duration metric: took 1.02383156s to wait for apiserver process to appear ...
	I0630 15:43:21.654386 1606244 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:43:21.654421 1606244 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8443/healthz ...
	I0630 15:43:24.095447 1606244 api_server.go:279] https://192.168.50.75:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:43:24.095480 1606244 api_server.go:103] status: https://192.168.50.75:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:43:24.095497 1606244 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8443/healthz ...
	I0630 15:43:24.115674 1606244 api_server.go:279] https://192.168.50.75:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:43:24.115706 1606244 api_server.go:103] status: https://192.168.50.75:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:43:24.155010 1606244 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8443/healthz ...
	I0630 15:43:24.203771 1606244 api_server.go:279] https://192.168.50.75:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:43:24.203808 1606244 api_server.go:103] status: https://192.168.50.75:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:43:24.655471 1606244 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8443/healthz ...
	I0630 15:43:24.664389 1606244 api_server.go:279] https://192.168.50.75:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:43:24.664425 1606244 api_server.go:103] status: https://192.168.50.75:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:43:25.155147 1606244 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8443/healthz ...
	I0630 15:43:25.165491 1606244 api_server.go:279] https://192.168.50.75:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:43:25.165534 1606244 api_server.go:103] status: https://192.168.50.75:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:43:25.655318 1606244 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8443/healthz ...
	I0630 15:43:25.660255 1606244 api_server.go:279] https://192.168.50.75:8443/healthz returned 200:
	ok
	I0630 15:43:25.667351 1606244 api_server.go:141] control plane version: v1.33.2
	I0630 15:43:25.667389 1606244 api_server.go:131] duration metric: took 4.01299606s to wait for apiserver health ...
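The 403 -> 500 -> 200 progression in the api_server.go poll above is the normal restart sequence: the unauthenticated probe is forbidden until the rbac/bootstrap-roles hook grants anonymous access to /healthz, the remaining [-] post-start hooks then surface as 500s, and finally the endpoint returns ok. A sketch of such a poll loop (certificate verification skipped here for brevity; minikube verifies against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls until /healthz returns 200, treating 403 and 500
    // as "not ready yet" rather than hard failures, as the log above does.
    func waitHealthz(url string, timeout time.Duration) bool {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return true
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.50.75:8443/healthz", time.Minute))
    }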
	I0630 15:43:25.667401 1606244 cni.go:84] Creating CNI manager for ""
	I0630 15:43:25.667409 1606244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:43:25.669503 1606244 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 15:43:25.671253 1606244 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:43:25.684872 1606244 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
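The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The snippet below writes a representative bridge conflist of the kind this step installs, using the standard bridge and portmap CNI plugins (an approximation, not the exact file):

    package main

    import "os"

    // A representative bridge conflist; the real 1-k8s.conflist is
    // generated by minikube, so treat this as an approximation.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        os.MkdirAll("/etc/cni/net.d", 0o755)
        os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
    }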
	I0630 15:43:25.706837 1606244 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:43:25.712313 1606244 system_pods.go:59] 8 kube-system pods found
	I0630 15:43:25.712383 1606244 system_pods.go:61] "coredns-674b8bbfcf-2kw7f" [cd0e21d5-9107-4de3-aa2b-18e8ed18e670] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:43:25.712398 1606244 system_pods.go:61] "coredns-674b8bbfcf-cnz6s" [1bdab480-d413-484c-820f-251129741c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:43:25.712410 1606244 system_pods.go:61] "etcd-kubernetes-upgrade-691468" [7dff12ea-8de3-4bf8-85ca-f8043e068376] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:43:25.712421 1606244 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-691468" [af184f7a-8b11-45c3-95d8-4381c35fbbfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:43:25.712436 1606244 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-691468" [4d369f5a-dd2e-421f-828f-8c00c8a2173a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:43:25.712447 1606244 system_pods.go:61] "kube-proxy-mn9sk" [25bdd073-2b5d-4633-af42-5f629fbdd0b4] Running
	I0630 15:43:25.712456 1606244 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-691468" [20533a90-3156-49e1-af30-07986bd7ba7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:43:25.712469 1606244 system_pods.go:61] "storage-provisioner" [45f816e0-ab27-4a9b-8c81-6c1499e999e8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:43:25.712480 1606244 system_pods.go:74] duration metric: took 5.609228ms to wait for pod list to return data ...
	I0630 15:43:25.712496 1606244 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:43:25.716611 1606244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:43:25.716651 1606244 node_conditions.go:123] node cpu capacity is 2
	I0630 15:43:25.716670 1606244 node_conditions.go:105] duration metric: took 4.165218ms to run NodePressure ...
	I0630 15:43:25.716699 1606244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:43:21.102553 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:21.103146 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:21.103171 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:21.103110 1606791 retry.go:31] will retry after 3.90700663s: waiting for domain to come up
	I0630 15:43:25.014358 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:25.014841 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:25.014901 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:25.014822 1606791 retry.go:31] will retry after 4.834626086s: waiting for domain to come up
	I0630 15:43:26.019860 1606244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:43:26.042869 1606244 ops.go:34] apiserver oom_adj: -16
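The ops.go line reads the apiserver's OOM score to confirm it is protected from the kernel OOM killer (-16 makes it an unlikely victim). Roughly, in Go:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj finds the newest kube-apiserver PID and reads its
    // oom_adj, as the /proc probe in the log above does.
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(adj)), nil
    }

    func main() {
        fmt.Println(apiserverOOMAdj())
    }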
	I0630 15:43:26.042898 1606244 kubeadm.go:593] duration metric: took 22.432920242s to restartPrimaryControlPlane
	I0630 15:43:26.042911 1606244 kubeadm.go:394] duration metric: took 22.622515941s to StartCluster
	I0630 15:43:26.042934 1606244 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:26.043039 1606244 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:43:26.044157 1606244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:26.044495 1606244 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.75 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:43:26.044631 1606244 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:43:26.044733 1606244 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:43:26.044756 1606244 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-691468"
	I0630 15:43:26.044784 1606244 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-691468"
	W0630 15:43:26.044818 1606244 addons.go:247] addon storage-provisioner should already be in state true
	I0630 15:43:26.044856 1606244 host.go:66] Checking if "kubernetes-upgrade-691468" exists ...
	I0630 15:43:26.044795 1606244 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-691468"
	I0630 15:43:26.044933 1606244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-691468"
	I0630 15:43:26.045258 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:43:26.045305 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:43:26.045353 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:43:26.045433 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:43:26.046113 1606244 out.go:177] * Verifying Kubernetes components...
	I0630 15:43:26.047628 1606244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:43:26.062979 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0630 15:43:26.063127 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0630 15:43:26.063553 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:43:26.063779 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:43:26.064354 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:43:26.064389 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:43:26.064669 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:43:26.064693 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:43:26.064814 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:43:26.065168 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:43:26.065371 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetState
	I0630 15:43:26.065578 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:43:26.065637 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:43:26.068392 1606244 kapi.go:59] client config for kubernetes-upgrade-691468: &rest.Config{Host:"https://192.168.50.75:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0630 15:43:26.068785 1606244 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-691468"
	W0630 15:43:26.068810 1606244 addons.go:247] addon default-storageclass should already be in state true
	I0630 15:43:26.068849 1606244 host.go:66] Checking if "kubernetes-upgrade-691468" exists ...
	I0630 15:43:26.069251 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:43:26.069435 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:43:26.082961 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0630 15:43:26.083668 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:43:26.084193 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:43:26.084227 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:43:26.084713 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:43:26.084935 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetState
	I0630 15:43:26.085220 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36013
	I0630 15:43:26.085835 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:43:26.086472 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:43:26.086492 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:43:26.086888 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:43:26.087129 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:43:26.087500 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:43:26.087537 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:43:26.089148 1606244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:43:26.090385 1606244 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:43:26.090402 1606244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:43:26.090425 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:43:26.094006 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:43:26.094404 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:43:26.094432 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:43:26.094666 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:43:26.094881 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:43:26.095044 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:43:26.095195 1606244 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
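sshutil.go builds its client from the values the driver's GetSSHHostname/GetSSHPort/GetSSHKeyPath/GetSSHUsername calls returned just above. A minimal equivalent with golang.org/x/crypto/ssh (host-key checking disabled, as is usual for throwaway test VMs):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient dials the VM with key-based auth, mirroring the
    // "new ssh client" line in the log above.
    func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs; not for production
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    func main() {
        _, err := newSSHClient("192.168.50.75", 22, os.Getenv("HOME")+"/.ssh/id_rsa", "docker")
        fmt.Println(err)
    }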
	I0630 15:43:26.105618 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0630 15:43:26.106303 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:43:26.107005 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:43:26.107039 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:43:26.107466 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:43:26.107641 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetState
	I0630 15:43:26.110887 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:43:26.111148 1606244 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:43:26.111171 1606244 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:43:26.111193 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:43:26.114717 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:43:26.115259 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:43:26.115292 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:43:26.115539 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:43:26.115772 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:43:26.115957 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:43:26.116084 1606244 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
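The recurring GetVersion / SetConfigRaw / GetMachineName triples come from libmachine's plugin model: each machine driver runs as a child process serving RPC on an ephemeral localhost port (the "Plugin server listening at address 127.0.0.1:<port>" lines). A toy server in that shape using net/rpc (illustrative only; libmachine's actual protocol differs in detail):

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    type Driver struct{}

    // GetMachineName mirrors one of the calls seen in the log above.
    func (d *Driver) GetMachineName(args string, name *string) error {
        *name = "kubernetes-upgrade-691468" // illustrative value from the log
        return nil
    }

    func main() {
        rpc.Register(new(Driver))
        ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, as in the log
        if err != nil {
            panic(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        for {
            conn, err := ln.Accept()
            if err != nil {
                return
            }
            go rpc.ServeConn(conn)
        }
    }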
	I0630 15:43:26.329349 1606244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:43:26.351089 1606244 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:43:26.351208 1606244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:43:26.372808 1606244 api_server.go:72] duration metric: took 328.271564ms to wait for apiserver process to appear ...
	I0630 15:43:26.372839 1606244 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:43:26.372862 1606244 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8443/healthz ...
	I0630 15:43:26.381391 1606244 api_server.go:279] https://192.168.50.75:8443/healthz returned 200:
	ok
	I0630 15:43:26.382497 1606244 api_server.go:141] control plane version: v1.33.2
	I0630 15:43:26.382520 1606244 api_server.go:131] duration metric: took 9.672932ms to wait for apiserver health ...
	I0630 15:43:26.382530 1606244 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:43:26.388882 1606244 system_pods.go:59] 8 kube-system pods found
	I0630 15:43:26.388927 1606244 system_pods.go:61] "coredns-674b8bbfcf-2kw7f" [cd0e21d5-9107-4de3-aa2b-18e8ed18e670] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:43:26.388934 1606244 system_pods.go:61] "coredns-674b8bbfcf-cnz6s" [1bdab480-d413-484c-820f-251129741c5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:43:26.388957 1606244 system_pods.go:61] "etcd-kubernetes-upgrade-691468" [7dff12ea-8de3-4bf8-85ca-f8043e068376] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:43:26.388965 1606244 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-691468" [af184f7a-8b11-45c3-95d8-4381c35fbbfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:43:26.388972 1606244 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-691468" [4d369f5a-dd2e-421f-828f-8c00c8a2173a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:43:26.388979 1606244 system_pods.go:61] "kube-proxy-mn9sk" [25bdd073-2b5d-4633-af42-5f629fbdd0b4] Running
	I0630 15:43:26.388984 1606244 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-691468" [20533a90-3156-49e1-af30-07986bd7ba7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:43:26.388988 1606244 system_pods.go:61] "storage-provisioner" [45f816e0-ab27-4a9b-8c81-6c1499e999e8] Running
	I0630 15:43:26.388995 1606244 system_pods.go:74] duration metric: took 6.460403ms to wait for pod list to return data ...
	I0630 15:43:26.389012 1606244 kubeadm.go:578] duration metric: took 344.481601ms to wait for: map[apiserver:true system_pods:true]
	I0630 15:43:26.389027 1606244 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:43:26.393202 1606244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:43:26.393231 1606244 node_conditions.go:123] node cpu capacity is 2
	I0630 15:43:26.393242 1606244 node_conditions.go:105] duration metric: took 4.208348ms to run NodePressure ...
	I0630 15:43:26.393253 1606244 start.go:241] waiting for startup goroutines ...
	I0630 15:43:26.438480 1606244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:43:26.478639 1606244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
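Each addon manifest copied into /etc/kubernetes/addons is applied with the cluster's own kubectl binary and the in-VM kubeconfig, exactly as the two Run lines show. Reproduced as a local command sketch (minikube actually runs this over the SSH session established above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon runs the same command the log shows: sudo accepts the
    // leading KUBECONFIG=... assignment as an environment variable.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.33.2/kubectl",
            "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"))
    }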
	I0630 15:43:27.097730 1606244 main.go:141] libmachine: Making call to close driver server
	I0630 15:43:27.097762 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .Close
	I0630 15:43:27.097786 1606244 main.go:141] libmachine: Making call to close driver server
	I0630 15:43:27.097811 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .Close
	I0630 15:43:27.098066 1606244 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:43:27.098084 1606244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:43:27.098094 1606244 main.go:141] libmachine: Making call to close driver server
	I0630 15:43:27.098159 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .Close
	I0630 15:43:27.098223 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Closing plugin on server side
	I0630 15:43:27.098241 1606244 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:43:27.098252 1606244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:43:27.098261 1606244 main.go:141] libmachine: Making call to close driver server
	I0630 15:43:27.098274 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .Close
	I0630 15:43:27.098471 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Closing plugin on server side
	I0630 15:43:27.098515 1606244 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:43:27.098558 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Closing plugin on server side
	I0630 15:43:27.098577 1606244 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:43:27.098596 1606244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:43:27.098564 1606244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:43:27.107073 1606244 main.go:141] libmachine: Making call to close driver server
	I0630 15:43:27.107124 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .Close
	I0630 15:43:27.107621 1606244 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:43:27.107628 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | Closing plugin on server side
	I0630 15:43:27.107649 1606244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:43:27.109600 1606244 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0630 15:43:27.110898 1606244 addons.go:514] duration metric: took 1.066278402s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0630 15:43:27.110937 1606244 start.go:246] waiting for cluster config update ...
	I0630 15:43:27.110949 1606244 start.go:255] writing updated cluster config ...
	I0630 15:43:27.111178 1606244 ssh_runner.go:195] Run: rm -f paused
	I0630 15:43:27.164768 1606244 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:43:27.166823 1606244 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-691468" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 30 15:43:27 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:27.981086392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298207981066014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5875b25a-bf5e-4409-bac1-14954638deca name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:27 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:27.981687267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3df18e9-f303-4c40-9bc2-c75c14934d99 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:27 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:27.981736676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3df18e9-f303-4c40-9bc2-c75c14934d99 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:27 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:27.982050082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f4c20744eaff1a1ed779d082d0ad540b16868fb790d5e8ba3fbd57ff0be0f9f,PodSandboxId:4d3165d1333bf00ee799c161a0d1a30676c3eabcc4f34f7c0a0dc057c721878e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751298204951898984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f816e0-ab27-4a9b-8c81-6c1499e999e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe7cd57db362e8e99172c223f16834cc04a445522b9373bf1d709c38568a42c,PodSandboxId:7846ca80c209b162c4aff1202978821f8278b4fa95724231ab2836c202f3620b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298204933308122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-cnz6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdab480-d413-484c-820f-251129741c5c,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa51cabb13f9af353178fb615ae1835e20bbcbbd317d728855e29ad89b02644,PodSandboxId:b50e1b6ec39288d58b1c4db282e652e6b7716da6bca02572d5c418ad69017685,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298204902297803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2kw7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: cd0e21d5-9107-4de3-aa2b-18e8ed18e670,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8b6bc3a96c8176910916bd5977c09c83d84a03b5bc46de46614ce5ba17dfe8,PodSandboxId:16c6b737a817c6bf1fccc5f213e7078d6637eaeb7e6fd8b4e1c8f4991882d77b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNI
NG,CreatedAt:1751298201173306548,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca12521b66e1a3a48038cebf06cab875,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ba19905a0b1755b8131dbe6c0cdaeb486b77c708f6a83bb2a48462d70f6c22,PodSandboxId:57ac2d90b4c026e0fa9f4d87019347adf8ed95ca527165378db8c64f764a2e0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,Cr
eatedAt:1751298201180299688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0c2856800ec19d9a85aa691e024c0f,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a26cfb4096139c2a7e9a7e7ef9dacb587e3fc3d6ff90ec6fecf5300f658e895,PodSandboxId:42a8b03520c5c663f765afa9b04aca7f6a292805d1ec647fac7177b1accf1e73,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298
201076060844,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef4bb9cd4893eac1791e60d5f561f3d,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0300f2ae8114bc0d332514aec9392244c75fdf97c6fd5521d6649fde15395579,PodSandboxId:8d2494ffb0f1bb528c6370d7d9f63be971d7808c4bae83f2369a4646bdaf03ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298201079675599,Labe
ls:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96bb7d5abd62e01621daae35e583bb83,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e08bd4ed1c9dc71f66224e478b8e660f271321b0251a99731b962b828fc27f1,PodSandboxId:08712fd8ac8df19335acc318acc46ec7364c6fee4c2187210ba1df306d098086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298197198533
519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn9sk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25bdd073-2b5d-4633-af42-5f629fbdd0b4,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e75711e052f5ff08b4874c1e1a030454394da0de01bd83e996bc7dfeb69fbe8,PodSandboxId:b50e1b6ec39288d58b1c4db282e652e6b7716da6bca02572d5c418ad69017685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298183319547560,Labels:map[string]string{io.kuber
netes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2kw7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd0e21d5-9107-4de3-aa2b-18e8ed18e670,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f5bf55946371a7a40cb81e86a9c4594c8032cef2803ec2a11c21532b30020e,PodSandboxId:7846ca80c209b162c4aff1202978821f8278b4fa95724231ab2836c202f3620b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298183180012355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-cnz6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdab480-d413-484c-820f-251129741c5c,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d540adc0a81bb2bb85b1341e8019c85b1baa48fd7f7b05d1e70ae62f981045,PodSandboxId:a7ab7243fb8ef7805fa215125ee529f5d574e39be81023ea
13ac63c4f91fbd7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751298179624331124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f816e0-ab27-4a9b-8c81-6c1499e999e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1648e9f667ab920e9b280ca62b03c31f604b4589b066cceec6ad98f8dca8a,PodSandboxId:73c29fd54746a55445e4f517d6a55d0a7de592a6854dee4de3caf28a7e8e4
839,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298179619030701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn9sk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25bdd073-2b5d-4633-af42-5f629fbdd0b4,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18777898da228b40c88db48f0049ab54171035f532e73cec3e6a8b8bf1b7536c,PodSandboxId:c4bc5a604b98d2b0bac4e9af63633a4751de38aa97f1355b2367ef04f7add4a9,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298179516608695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca12521b66e1a3a48038cebf06cab875,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2de606b3105608917c510bca7b948eaab4103dfdd766ee5c75522f2645b0a5,PodSandboxId:f352669433159c7ea84cde0535efd44bfeb6b4b03d7b9389b91889269b78b03e,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298179289906525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0c2856800ec19d9a85aa691e024c0f,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c76b6c9b4fc63777b1ae885d697d63d167f1a530ac20bf75e7e14dc1246c226e,PodSandboxId:b45b7d35161a338aeaf3fdf04e8807f4c9ad0dc4c9704bd33dce0c23b534e92f,Metadata:&ContainerMetadata{Name:etcd,Atte
mpt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298179072776464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef4bb9cd4893eac1791e60d5f561f3d,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df91a997cdebad0fde37c949dd74769301e3b3f0249413cb1bbcb8aa9887166,PodSandboxId:5ee48f28ebdd3fac27974d974741f345214165d56868a32b501a7785f359e8c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Ima
geSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298178862427506,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96bb7d5abd62e01621daae35e583bb83,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3df18e9-f303-4c40-9bc2-c75c14934d99 name=/runtime.v1.RuntimeService/ListContainers
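
The entries above and below are crio's debug-level logging of the Kubernetes CRI gRPC surface: while these logs were gathered, the collector repeatedly issued Version, ImageFsInfo, and unfiltered ListContainers RPCs in quick succession, and crio echoed each request and its full response. As a rough illustration only (not part of the minikube test harness), the same unfiltered listing can be reproduced with the k8s.io/cri-api Go bindings; the socket path below is the usual crio default, and support for the grpc-go unix:// target scheme is assumed:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the crio CRI socket (assumed default path; adjust for other hosts).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty ListContainersRequest takes the "No filters were applied" path
	// seen in the log above: crio returns the complete container list,
	// running and exited alike.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-24s attempt=%d %-18v pod=%s\n",
			c.Metadata.Name, c.Metadata.Attempt, c.State,
			c.Labels["io.kubernetes.pod.name"])
	}
}

Where crictl is available on the node, crictl ps -a against the same socket prints an equivalent table.
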
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.024273766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef82eb57-7eab-45d6-99d8-ce8e586d7532 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.024353621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef82eb57-7eab-45d6-99d8-ce8e586d7532 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.025448555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ec571a4-58de-4691-9de0-e26668fc2ae4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.025855295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298208025831255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ec571a4-58de-4691-9de0-e26668fc2ae4 name=/runtime.v1.ImageService/ImageFsInfo
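
The ImageFsInfo response just above reports the image store mounted at /var/lib/containers/storage/overlay-images as 125816 bytes used across 57 inodes. A minimal sketch of the same query, under the same assumptions as the listing sketch earlier (default crio socket, k8s.io/cri-api bindings); note that ImageFsInfo is served by the CRI image service, not the runtime service:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// ImageFsInfo lives on the image service client.
	img := runtimeapi.NewImageServiceClient(conn)
	info, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	// Each FilesystemUsage mirrors the fields visible in the log line above.
	for _, fs := range info.ImageFilesystems {
		fmt.Printf("%s: used=%d bytes, inodes=%d\n",
			fs.FsId.Mountpoint, fs.UsedBytes.Value, fs.InodesUsed.Value)
	}
}

Where crictl is installed, crictl imagefsinfo returns the same data.
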
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.028535505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=483ad36a-b50b-4fa8-bc45-4ca10dcf3873 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.028715367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=483ad36a-b50b-4fa8-bc45-4ca10dcf3873 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.029319663Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=483ad36a-b50b-4fa8-bc45-4ca10dcf3873 name=/runtime.v1.RuntimeService/ListContainers [response payload identical to the ListContainersResponse above]
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.076185336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b49c8ab-baae-4ead-ab48-895d95947ed8 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.076253641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b49c8ab-baae-4ead-ab48-895d95947ed8 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.078483891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b744b6b0-29d6-431b-844c-5e192e234c14 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.079154862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298208079071331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b744b6b0-29d6-431b-844c-5e192e234c14 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.080225368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60f79be6-f45e-44e5-8b3c-964fe0afdfc1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.080276262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60f79be6-f45e-44e5-8b3c-964fe0afdfc1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.080650126Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=60f79be6-f45e-44e5-8b3c-964fe0afdfc1 name=/runtime.v1.RuntimeService/ListContainers [response payload identical to the ListContainersResponse above]
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.116802200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=287debdd-a3f9-4f8e-ae2a-c99620d724f3 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.116877224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=287debdd-a3f9-4f8e-ae2a-c99620d724f3 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.118031077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8e3abde-50db-4669-bcf7-5a6fa31628d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.118473039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298208118450637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8e3abde-50db-4669-bcf7-5a6fa31628d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.119558784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f321b3b-5cf0-4da4-879b-67218261dad4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.119650996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f321b3b-5cf0-4da4-879b-67218261dad4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:28 kubernetes-upgrade-691468 crio[3677]: time="2025-06-30 15:43:28.120200182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f4c20744eaff1a1ed779d082d0ad540b16868fb790d5e8ba3fbd57ff0be0f9f,PodSandboxId:4d3165d1333bf00ee799c161a0d1a30676c3eabcc4f34f7c0a0dc057c721878e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1751298204951898984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f816e0-ab27-4a9b-8c81-6c1499e999e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fe7cd57db362e8e99172c223f16834cc04a445522b9373bf1d709c38568a42c,PodSandboxId:7846ca80c209b162c4aff1202978821f8278b4fa95724231ab2836c202f3620b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298204933308122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-cnz6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdab480-d413-484c-820f-251129741c5c,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:6aa51cabb13f9af353178fb615ae1835e20bbcbbd317d728855e29ad89b02644,PodSandboxId:b50e1b6ec39288d58b1c4db282e652e6b7716da6bca02572d5c418ad69017685,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298204902297803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2kw7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd0e21d5-9107-4de3-aa2b-18e8ed18e670,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ae8b6bc3a96c8176910916bd5977c09c83d84a03b5bc46de46614ce5ba17dfe8,PodSandboxId:16c6b737a817c6bf1fccc5f213e7078d6637eaeb7e6fd8b4e1c8f4991882d77b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298201173306548,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca12521b66e1a3a48038cebf06cab875,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:39ba19905a0b1755b8131dbe6c0cdaeb486b77c708f6a83bb2a48462d70f6c22,PodSandboxId:57ac2d90b4c026e0fa9f4d87019347adf8ed95ca527165378db8c64f764a2e0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298201180299688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0c2856800ec19d9a85aa691e024c0f,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:4a26cfb4096139c2a7e9a7e7ef9dacb587e3fc3d6ff90ec6fecf5300f658e895,PodSandboxId:42a8b03520c5c663f765afa9b04aca7f6a292805d1ec647fac7177b1accf1e73,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298201076060844,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef4bb9cd4893eac1791e60d5f561f3d,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:0300f2ae8114bc0d332514aec9392244c75fdf97c6fd5521d6649fde15395579,PodSandboxId:8d2494ffb0f1bb528c6370d7d9f63be971d7808c4bae83f2369a4646bdaf03ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298201079675599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96bb7d5abd62e01621daae35e583bb83,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:2e08bd4ed1c9dc71f66224e478b8e660f271321b0251a99731b962b828fc27f1,PodSandboxId:08712fd8ac8df19335acc318acc46ec7364c6fee4c2187210ba1df306d098086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298197198533519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn9sk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25bdd073-2b5d-4633-af42-5f629fbdd0b4,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:0e75711e052f5ff08b4874c1e1a030454394da0de01bd83e996bc7dfeb69fbe8,PodSandboxId:b50e1b6ec39288d58b1c4db282e652e6b7716da6bca02572d5c418ad69017685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298183319547560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-2kw7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd0e21d5-9107-4de3-aa2b-18e8ed18e670,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:54f5bf55946371a7a40cb81e86a9c4594c8032cef2803ec2a11c21532b30020e,PodSandboxId:7846ca80c209b162c4aff1202978821f8278b4fa95724231ab2836c202f3620b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298183180012355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-cnz6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdab480-d413-484c-820f-251129741c5c,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:30d540adc0a81bb2bb85b1341e8019c85b1baa48fd7f7b05d1e70ae62f981045,PodSandboxId:a7ab7243fb8ef7805fa215125ee529f5d574e39be81023ea13ac63c4f91fbd7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1751298179624331124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f816e0-ab27-4a9b-8c81-6c1499e999e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:e3a1648e9f667ab920e9b280ca62b03c31f604b4589b066cceec6ad98f8dca8a,PodSandboxId:73c29fd54746a55445e4f517d6a55d0a7de592a6854dee4de3caf28a7e8e4839,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298179619030701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mn9sk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25bdd073-2b5d-4633-af42-5f629fbdd0b4,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:18777898da228b40c88db48f0049ab54171035f532e73cec3e6a8b8bf1b7536c,PodSandboxId:c4bc5a604b98d2b0bac4e9af63633a4751de38aa97f1355b2367ef04f7add4a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298179516608695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca12521b66e1a3a48038cebf06cab875,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:8e2de606b3105608917c510bca7b948eaab4103dfdd766ee5c75522f2645b0a5,PodSandboxId:f352669433159c7ea84cde0535efd44bfeb6b4b03d7b9389b91889269b78b03e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298179289906525,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0c2856800ec19d9a85aa691e024c0f,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:c76b6c9b4fc63777b1ae885d697d63d167f1a530ac20bf75e7e14dc1246c226e,PodSandboxId:b45b7d35161a338aeaf3fdf04e8807f4c9ad0dc4c9704bd33dce0c23b534e92f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298179072776464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef4bb9cd4893eac1791e60d5f561f3d,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:3df91a997cdebad0fde37c949dd74769301e3b3f0249413cb1bbcb8aa9887166,PodSandboxId:5ee48f28ebdd3fac27974d974741f345214165d56868a32b501a7785f359e8c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298178862427506,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-691468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96bb7d5abd62e01621daae35e583bb83,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f321b3b-5cf0-4da4-879b-67218261dad4 name=/runtime.v1.RuntimeService/ListContainers
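
The ListContainers entry above is crio's own debug record of the CRI call the test harness issues while collecting post-mortem logs. A comparable listing can be pulled straight from the node; a minimal sketch, assuming SSH access to the minikube VM (where crictl ships by default):

    minikube ssh -p kubernetes-upgrade-691468 -- sudo crictl ps -a -o json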
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f4c20744eaff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   4d3165d1333bf       storage-provisioner
	4fe7cd57db362       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   3 seconds ago       Running             coredns                   2                   7846ca80c209b       coredns-674b8bbfcf-cnz6s
	6aa51cabb13f9       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   3 seconds ago       Running             coredns                   2                   b50e1b6ec3928       coredns-674b8bbfcf-2kw7f
	39ba19905a0b1       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e   7 seconds ago       Running             kube-apiserver            2                   57ac2d90b4c02       kube-apiserver-kubernetes-upgrade-691468
	ae8b6bc3a96c8       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b   7 seconds ago       Running             kube-scheduler            2                   16c6b737a817c       kube-scheduler-kubernetes-upgrade-691468
	0300f2ae8114b       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2   7 seconds ago       Running             kube-controller-manager   2                   8d2494ffb0f1b       kube-controller-manager-kubernetes-upgrade-691468
	4a26cfb409613       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   7 seconds ago       Running             etcd                      2                   42a8b03520c5c       etcd-kubernetes-upgrade-691468
	2e08bd4ed1c9d       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19   11 seconds ago      Running             kube-proxy                2                   08712fd8ac8df       kube-proxy-mn9sk
	0e75711e052f5       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   24 seconds ago      Exited              coredns                   1                   b50e1b6ec3928       coredns-674b8bbfcf-2kw7f
	54f5bf5594637       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   25 seconds ago      Exited              coredns                   1                   7846ca80c209b       coredns-674b8bbfcf-cnz6s
	30d540adc0a81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   28 seconds ago      Exited              storage-provisioner       2                   a7ab7243fb8ef       storage-provisioner
	e3a1648e9f667       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19   28 seconds ago      Exited              kube-proxy                1                   73c29fd54746a       kube-proxy-mn9sk
	18777898da228       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b   28 seconds ago      Exited              kube-scheduler            1                   c4bc5a604b98d       kube-scheduler-kubernetes-upgrade-691468
	8e2de606b3105       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e   28 seconds ago      Exited              kube-apiserver            1                   f352669433159       kube-apiserver-kubernetes-upgrade-691468
	c76b6c9b4fc63       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   29 seconds ago      Exited              etcd                      1                   b45b7d35161a3       etcd-kubernetes-upgrade-691468
	3df91a997cdeb       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2   29 seconds ago      Exited              kube-controller-manager   1                   5ee48f28ebdd3       kube-controller-manager-kubernetes-upgrade-691468
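	
	Reading the table: every control-plane component shows attempt 1 exited roughly 28 seconds ago and attempt 2 running for under 10 seconds, consistent with the kubelet being restarted twice during the upgrade rather than with any single component crash-looping. To isolate just the exited attempts on the node (a sketch using crictl's state filter):
	
	    minikube ssh -p kubernetes-upgrade-691468 -- sudo crictl ps -a --state exited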
	
	
	==> coredns [0e75711e052f5ff08b4874c1e1a030454394da0de01bd83e996bc7dfeb69fbe8] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] plugin/health: Going into lameduck mode for 5s
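	
	The "connection refused" errors against 10.96.0.1:443 mean the in-cluster apiserver Service was unreachable while the control plane restarted; the SIGTERM and lameduck lines show this attempt-1 instance then shut down cleanly. The same output can also be fetched per pod after the fact, e.g. with the pod name from the table above:
	
	    kubectl --context kubernetes-upgrade-691468 -n kube-system logs coredns-674b8bbfcf-2kw7f --previous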
	
	
	==> coredns [4fe7cd57db362e8e99172c223f16834cc04a445522b9373bf1d709c38568a42c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	
	
	==> coredns [54f5bf55946371a7a40cb81e86a9c4594c8032cef2803ec2a11c21532b30020e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6aa51cabb13f9af353178fb615ae1835e20bbcbbd317d728855e29ad89b02644] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-691468
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-691468
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 15:42:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-691468
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 15:43:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 15:43:24 +0000   Mon, 30 Jun 2025 15:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 15:43:24 +0000   Mon, 30 Jun 2025 15:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 15:43:24 +0000   Mon, 30 Jun 2025 15:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 15:43:24 +0000   Mon, 30 Jun 2025 15:42:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.75
	  Hostname:    kubernetes-upgrade-691468
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	System Info:
	  Machine ID:                 5d06302eb6e14268b250dda71e8ce459
	  System UUID:                5d06302e-b6e1-4268-b250-dda71e8ce459
	  Boot ID:                    1ba19e7c-942e-4bf8-afad-5217d97c4c88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-2kw7f                             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     53s
	  kube-system                 coredns-674b8bbfcf-cnz6s                             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     53s
	  kube-system                 etcd-kubernetes-upgrade-691468                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         50s
	  kube-system                 kube-apiserver-kubernetes-upgrade-691468             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-691468    200m (10%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-mn9sk                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-kubernetes-upgrade-691468             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node kubernetes-upgrade-691468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node kubernetes-upgrade-691468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node kubernetes-upgrade-691468 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node kubernetes-upgrade-691468 event: Registered Node kubernetes-upgrade-691468 in Controller
	  Normal  CIDRAssignmentFailed     54s                cidrAllocator    Node kubernetes-upgrade-691468 status is now: CIDRAssignmentFailed
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-691468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-691468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-691468 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-691468 event: Registered Node kubernetes-upgrade-691468 in Controller
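	
	The CIDRAssignmentFailed event at 54s is worth noting: the node nevertheless ended up with PodCIDR 10.244.0.0/24, so the allocation evidently succeeded on a retry. To pull the node's event stream on its own, one option (standard kubectl field selectors):
	
	    kubectl --context kubernetes-upgrade-691468 get events --field-selector involvedObject.name=kubernetes-upgrade-691468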
	
	
	==> dmesg <==
	[Jun30 15:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jun30 15:42] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002550] (rpcbind)[142]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.967539] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.101796] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104306] kauditd_printk_skb: 46 callbacks suppressed
	[  +1.245680] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.122901] kauditd_printk_skb: 109 callbacks suppressed
	[Jun30 15:43] kauditd_printk_skb: 278 callbacks suppressed
	[  +5.002997] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [4a26cfb4096139c2a7e9a7e7ef9dacb587e3fc3d6ff90ec6fecf5300f658e895] <==
	{"level":"info","ts":"2025-06-30T15:43:21.457606Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-06-30T15:43:21.457718Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-06-30T15:43:21.457748Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-06-30T15:43:21.457980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 switched to configuration voters=(3905519800180279527)"}
	{"level":"info","ts":"2025-06-30T15:43:21.458114Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5bdbf71200db9bfc","local-member-id":"36333190e10008e7","added-peer-id":"36333190e10008e7","added-peer-peer-urls":["https://192.168.50.75:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-06-30T15:43:21.458317Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"5bdbf71200db9bfc","local-member-id":"36333190e10008e7","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:43:21.459005Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:43:21.458710Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.50.75:2380"}
	{"level":"info","ts":"2025-06-30T15:43:21.460519Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.50.75:2380"}
	{"level":"info","ts":"2025-06-30T15:43:22.817178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 is starting a new election at term 2"}
	{"level":"info","ts":"2025-06-30T15:43:22.817257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-06-30T15:43:22.817306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 received MsgPreVoteResp from 36333190e10008e7 at term 2"}
	{"level":"info","ts":"2025-06-30T15:43:22.817324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T15:43:22.817458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 received MsgVoteResp from 36333190e10008e7 at term 3"}
	{"level":"info","ts":"2025-06-30T15:43:22.817488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 became leader at term 3"}
	{"level":"info","ts":"2025-06-30T15:43:22.817549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 36333190e10008e7 elected leader 36333190e10008e7 at term 3"}
	{"level":"info","ts":"2025-06-30T15:43:22.820890Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:43:22.821846Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:43:22.822742Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.75:2379"}
	{"level":"info","ts":"2025-06-30T15:43:22.823097Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:43:22.823811Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:43:22.824725Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T15:43:22.826464Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"36333190e10008e7","local-member-attributes":"{Name:kubernetes-upgrade-691468 ClientURLs:[https://192.168.50.75:2379]}","request-path":"/0/members/36333190e10008e7/attributes","cluster-id":"5bdbf71200db9bfc","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T15:43:22.833365Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T15:43:22.833485Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
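	
	This attempt-2 etcd member re-elected itself leader at term 3 and is serving clients on 192.168.50.75:2379. A quick health probe, sketched under the assumption that the cert paths logged in the next section are also mounted inside the etcd container:
	
	    minikube ssh -p kubernetes-upgrade-691468 -- sudo crictl exec 4a26cfb409613 etcdctl \
	      --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health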
	
	
	==> etcd [c76b6c9b4fc63777b1ae885d697d63d167f1a530ac20bf75e7e14dc1246c226e] <==
	{"level":"info","ts":"2025-06-30T15:43:00.063519Z","caller":"etcdserver/raft.go:541","msg":"restarting local member","cluster-id":"5bdbf71200db9bfc","local-member-id":"36333190e10008e7","commit-index":419}
	{"level":"info","ts":"2025-06-30T15:43:00.063679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 switched to configuration voters=()"}
	{"level":"info","ts":"2025-06-30T15:43:00.063720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 became follower at term 2"}
	{"level":"info","ts":"2025-06-30T15:43:00.063753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 36333190e10008e7 [peers: [], term: 2, commit: 419, applied: 0, lastindex: 419, lastterm: 2]"}
	{"level":"warn","ts":"2025-06-30T15:43:00.087192Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-06-30T15:43:00.125229Z","caller":"mvcc/kvstore.go:425","msg":"kvstore restored","current-rev":408}
	{"level":"info","ts":"2025-06-30T15:43:00.125307Z","caller":"etcdserver/server.go:628","msg":"restore consistentIndex","index":419}
	{"level":"info","ts":"2025-06-30T15:43:00.139515Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-06-30T15:43:00.153164Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"36333190e10008e7","timeout":"7s"}
	{"level":"info","ts":"2025-06-30T15:43:00.157742Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"36333190e10008e7"}
	{"level":"info","ts":"2025-06-30T15:43:00.157809Z","caller":"etcdserver/server.go:875","msg":"starting etcd server","local-member-id":"36333190e10008e7","local-server-version":"3.5.21","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-06-30T15:43:00.158461Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:43:00.161098Z","caller":"etcdserver/server.go:775","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-06-30T15:43:00.165809Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-06-30T15:43:00.165929Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-06-30T15:43:00.165942Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-06-30T15:43:00.166162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 switched to configuration voters=(3905519800180279527)"}
	{"level":"info","ts":"2025-06-30T15:43:00.166211Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5bdbf71200db9bfc","local-member-id":"36333190e10008e7","added-peer-id":"36333190e10008e7","added-peer-peer-urls":["https://192.168.50.75:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-06-30T15:43:00.166284Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"5bdbf71200db9bfc","local-member-id":"36333190e10008e7","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:43:00.166311Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-30T15:43:00.175991Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-06-30T15:43:00.177835Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"36333190e10008e7","initial-advertise-peer-urls":["https://192.168.50.75:2380"],"listen-peer-urls":["https://192.168.50.75:2380"],"advertise-client-urls":["https://192.168.50.75:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.75:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-06-30T15:43:00.177879Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-30T15:43:00.177947Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.50.75:2380"}
	{"level":"info","ts":"2025-06-30T15:43:00.177962Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.50.75:2380"}
	
	
	==> kernel <==
	 15:43:28 up 1 min,  0 users,  load average: 1.39, 0.42, 0.14
	Linux kubernetes-upgrade-691468 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [39ba19905a0b1755b8131dbe6c0cdaeb486b77c708f6a83bb2a48462d70f6c22] <==
	I0630 15:43:24.191915       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0630 15:43:24.197502       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0630 15:43:24.200913       1 shared_informer.go:357] "Caches are synced" controller="crd-autoregister"
	I0630 15:43:24.201031       1 aggregator.go:171] initial CRD sync complete...
	I0630 15:43:24.201055       1 autoregister_controller.go:144] Starting autoregister controller
	I0630 15:43:24.201060       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0630 15:43:24.201065       1 cache.go:39] Caches are synced for autoregister controller
	I0630 15:43:24.204249       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0630 15:43:24.212895       1 shared_informer.go:357] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0630 15:43:24.212978       1 policy_source.go:240] refreshing policies
	E0630 15:43:24.213618       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0630 15:43:24.229486       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:43:24.257075       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0630 15:43:24.259130       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0630 15:43:24.259348       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0630 15:43:24.637628       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0630 15:43:25.083182       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 15:43:25.145198       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 15:43:25.846553       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 15:43:25.902845       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0630 15:43:25.967918       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 15:43:25.997622       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 15:43:27.685968       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:43:27.831368       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0630 15:43:27.930824       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8e2de606b3105608917c510bca7b948eaab4103dfdd766ee5c75522f2645b0a5] <==
	I0630 15:43:00.307466       1 options.go:249] external host was not specified, using 192.168.50.75
	I0630 15:43:00.321729       1 server.go:147] Version: v1.33.2
	I0630 15:43:00.321801       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
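	
	The attempt-1 apiserver got no further than its startup banner, which fits a process that was terminated during the restart rather than one that crashed on its own. If more detail is needed, the exited container's full output remains readable on the node (crictl accepts ID prefixes):
	
	    minikube ssh -p kubernetes-upgrade-691468 -- sudo crictl logs 8e2de606b3105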
	
	
	==> kube-controller-manager [0300f2ae8114bc0d332514aec9392244c75fdf97c6fd5521d6649fde15395579] <==
	I0630 15:43:27.424913       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0630 15:43:27.425602       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0630 15:43:27.431299       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0630 15:43:27.434330       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0630 15:43:27.437642       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 15:43:27.455455       1 shared_informer.go:357] "Caches are synced" controller="PV protection"
	I0630 15:43:27.478045       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0630 15:43:27.493072       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0630 15:43:27.505168       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 15:43:27.516501       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0630 15:43:27.530214       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0630 15:43:27.576076       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0630 15:43:27.587566       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 15:43:27.620710       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 15:43:27.644424       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0630 15:43:27.681564       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 15:43:27.700143       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 15:43:27.702551       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0630 15:43:27.702892       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-691468"
	I0630 15:43:27.724304       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0630 15:43:27.734687       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 15:43:28.159767       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 15:43:28.159830       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 15:43:28.159839       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0630 15:43:28.169585       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [3df91a997cdebad0fde37c949dd74769301e3b3f0249413cb1bbcb8aa9887166] <==
	
	
	==> kube-proxy [2e08bd4ed1c9dc71f66224e478b8e660f271321b0251a99731b962b828fc27f1] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0630 15:43:17.357990       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-691468\": dial tcp 192.168.50.75:8443: connect: connection refused"
	E0630 15:43:18.398480       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-691468\": dial tcp 192.168.50.75:8443: connect: connection refused"
	E0630 15:43:20.656899       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-691468\": dial tcp 192.168.50.75:8443: connect: connection refused"
	I0630 15:43:25.245592       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.50.75"]
	E0630 15:43:25.245660       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 15:43:25.289846       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 15:43:25.289916       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 15:43:25.289941       1 server_linux.go:145] "Using iptables Proxier"
	I0630 15:43:25.303917       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 15:43:25.305120       1 server.go:516] "Version info" version="v1.33.2"
	I0630 15:43:25.305315       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:43:25.311155       1 config.go:199] "Starting service config controller"
	I0630 15:43:25.311963       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 15:43:25.312023       1 config.go:105] "Starting endpoint slice config controller"
	I0630 15:43:25.312040       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 15:43:25.312064       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 15:43:25.312079       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 15:43:25.312613       1 config.go:329] "Starting node config controller"
	I0630 15:43:25.313450       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 15:43:25.412597       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 15:43:25.412720       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 15:43:25.412701       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 15:43:25.413575       1 shared_informer.go:357] "Caches are synced" controller="node config"
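	
	kube-proxy retried the node-info fetch until the apiserver came back at 15:43:25, then synced all four of its config controllers within about 100ms. Reachability of the apiserver endpoint itself can be checked from inside the VM; /healthz should be readable anonymously under the default system:public-info-viewer binding:
	
	    minikube ssh -p kubernetes-upgrade-691468 -- curl -sk https://192.168.50.75:8443/healthz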
	
	
	==> kube-proxy [e3a1648e9f667ab920e9b280ca62b03c31f604b4589b066cceec6ad98f8dca8a] <==
	
	
	==> kube-scheduler [18777898da228b40c88db48f0049ab54171035f532e73cec3e6a8b8bf1b7536c] <==
	
	
	==> kube-scheduler [ae8b6bc3a96c8176910916bd5977c09c83d84a03b5bc46de46614ce5ba17dfe8] <==
	I0630 15:43:22.489659       1 serving.go:386] Generated self-signed cert in-memory
	W0630 15:43:24.116959       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0630 15:43:24.117034       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0630 15:43:24.117045       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 15:43:24.117051       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 15:43:24.216017       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 15:43:24.216280       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:43:24.219623       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:43:24.219662       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:43:24.220454       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 15:43:24.220707       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0630 15:43:24.320597       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 15:43:23 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:23.908760    4837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-691468\" not found" node="kubernetes-upgrade-691468"
	Jun 30 15:43:23 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:23.909539    4837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-691468\" not found" node="kubernetes-upgrade-691468"
	Jun 30 15:43:23 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:23.909832    4837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-691468\" not found" node="kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:24.084681    4837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-691468\" not found" node="kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.196463    4837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.321050    4837 kubelet_node_status.go:124] "Node was previously registered" node="kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.321237    4837 kubelet_node_status.go:78] "Successfully registered node" node="kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.321334    4837 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.322916    4837 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:24.325557    4837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-691468\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.325753    4837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:24.341326    4837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-691468\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.341418    4837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:24.353445    4837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-691468\" already exists" pod="kube-system/etcd-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.353472    4837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: E0630 15:43:24.364051    4837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-691468\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-691468"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.568622    4837 apiserver.go:52] "Watching apiserver"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.597059    4837 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.627210    4837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45f816e0-ab27-4a9b-8c81-6c1499e999e8-tmp\") pod \"storage-provisioner\" (UID: \"45f816e0-ab27-4a9b-8c81-6c1499e999e8\") " pod="kube-system/storage-provisioner"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.627631    4837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25bdd073-2b5d-4633-af42-5f629fbdd0b4-xtables-lock\") pod \"kube-proxy-mn9sk\" (UID: \"25bdd073-2b5d-4633-af42-5f629fbdd0b4\") " pod="kube-system/kube-proxy-mn9sk"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.627776    4837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25bdd073-2b5d-4633-af42-5f629fbdd0b4-lib-modules\") pod \"kube-proxy-mn9sk\" (UID: \"25bdd073-2b5d-4633-af42-5f629fbdd0b4\") " pod="kube-system/kube-proxy-mn9sk"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.883232    4837 scope.go:117] "RemoveContainer" containerID="0e75711e052f5ff08b4874c1e1a030454394da0de01bd83e996bc7dfeb69fbe8"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.891041    4837 scope.go:117] "RemoveContainer" containerID="54f5bf55946371a7a40cb81e86a9c4594c8032cef2803ec2a11c21532b30020e"
	Jun 30 15:43:24 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:24.897085    4837 scope.go:117] "RemoveContainer" containerID="30d540adc0a81bb2bb85b1341e8019c85b1baa48fd7f7b05d1e70ae62f981045"
	Jun 30 15:43:26 kubernetes-upgrade-691468 kubelet[4837]: I0630 15:43:26.939045    4837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
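	
	The repeated "Failed creating a mirror pod ... already exists" errors are expected on a kubelet restart: the mirror pods for the static control-plane pods survive in the apiserver while the kubelet re-registers. For anything beyond this excerpt, the full journal on the node is the place to look:
	
	    minikube ssh -p kubernetes-upgrade-691468 -- sudo journalctl -u kubelet --no-pager -n 200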
	
	
	==> storage-provisioner [2f4c20744eaff1a1ed779d082d0ad540b16868fb790d5e8ba3fbd57ff0be0f9f] <==
	I0630 15:43:25.118370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0630 15:43:25.135432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0630 15:43:25.135562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0630 15:43:25.142616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 15:43:25.161624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 15:43:25.161774       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0630 15:43:25.161911       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-691468_db690bcf-fe13-4be4-8c03-3c72656b8101!
	I0630 15:43:25.162921       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"021a94f7-0b52-44ae-810a-b60ba5f16da4", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-691468_db690bcf-fe13-4be4-8c03-3c72656b8101 became leader
	W0630 15:43:25.171931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 15:43:25.178357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0630 15:43:25.263550       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-691468_db690bcf-fe13-4be4-8c03-3c72656b8101!
	W0630 15:43:27.182362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 15:43:27.198363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [30d540adc0a81bb2bb85b1341e8019c85b1baa48fd7f7b05d1e70ae62f981045] <==
	

-- /stdout --
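The client-go deprecation warnings in the storage-provisioner log above come from its leader election, which still takes its lock on a v1 Endpoints object (note Kind:"Endpoints" in the LeaderElection event). A minimal client-go sketch of the coordination.k8s.io/v1 Lease lock that the warning points toward; the lease name and namespace are taken from the log, while the in-cluster config, identity string, and timings are illustrative assumptions, not the provisioner's actual code:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the provisioner runs in-cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; avoids the deprecated v1 Endpoints traffic entirely.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lease name from the log
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "hypothetical-provisioner-id"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // illustrative timings
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
			OnStoppedLeading: func() { /* lease lost; stop provisioning */ },
		},
	})
}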
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-691468 -n kubernetes-upgrade-691468
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-691468 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-691468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-691468
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-691468: (1.026305088s)
--- FAIL: TestKubernetesUpgrade (421.36s)

TestPause/serial/SecondStartNoReconfiguration (91.17s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-011818 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-011818 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.096175457s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-011818] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-011818" primary control-plane node in "pause-011818" cluster
	* Updating the running kvm2 "pause-011818" VM ...
	* Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-011818" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0630 15:41:33.773088 1605445 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:41:33.773202 1605445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:41:33.773208 1605445 out.go:358] Setting ErrFile to fd 2...
	I0630 15:41:33.773214 1605445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:41:33.773484 1605445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:41:33.774085 1605445 out.go:352] Setting JSON to false
	I0630 15:41:33.775158 1605445 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33786,"bootTime":1751264308,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:41:33.775285 1605445 start.go:140] virtualization: kvm guest
	I0630 15:41:33.777445 1605445 out.go:177] * [pause-011818] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:41:33.779047 1605445 notify.go:220] Checking for updates...
	I0630 15:41:33.779072 1605445 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:41:33.780694 1605445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:41:33.782273 1605445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:41:33.784228 1605445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:41:33.785776 1605445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:41:33.787162 1605445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:41:33.789143 1605445 config.go:182] Loaded profile config "pause-011818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:41:33.789821 1605445 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:41:33.789898 1605445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:41:33.807715 1605445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I0630 15:41:33.808304 1605445 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:41:33.808807 1605445 main.go:141] libmachine: Using API Version  1
	I0630 15:41:33.808843 1605445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:41:33.809240 1605445 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:41:33.809452 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:41:33.809815 1605445 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:41:33.810104 1605445 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:41:33.810179 1605445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:41:33.827497 1605445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0630 15:41:33.828131 1605445 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:41:33.828678 1605445 main.go:141] libmachine: Using API Version  1
	I0630 15:41:33.828703 1605445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:41:33.829280 1605445 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:41:33.829517 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:41:33.872395 1605445 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 15:41:33.873657 1605445 start.go:304] selected driver: kvm2
	I0630 15:41:33.873674 1605445 start.go:908] validating driver "kvm2" against &{Name:pause-011818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:pause-011818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:41:33.873807 1605445 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:41:33.874157 1605445 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:41:33.874245 1605445 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:41:33.891205 1605445 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:41:33.892299 1605445 cni.go:84] Creating CNI manager for ""
	I0630 15:41:33.892376 1605445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:41:33.892460 1605445 start.go:347] cluster config:
	{Name:pause-011818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:pause-011818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:41:33.892655 1605445 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:41:33.894868 1605445 out.go:177] * Starting "pause-011818" primary control-plane node in "pause-011818" cluster
	I0630 15:41:33.896185 1605445 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:41:33.896223 1605445 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 15:41:33.896233 1605445 cache.go:56] Caching tarball of preloaded images
	I0630 15:41:33.896341 1605445 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:41:33.896356 1605445 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 15:41:33.896474 1605445 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/config.json ...
	I0630 15:41:33.896674 1605445 start.go:360] acquireMachinesLock for pause-011818: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:42:13.639792 1605445 start.go:364] duration metric: took 39.74308555s to acquireMachinesLock for "pause-011818"
	I0630 15:42:13.639855 1605445 start.go:96] Skipping create...Using existing machine configuration
	I0630 15:42:13.639864 1605445 fix.go:54] fixHost starting: 
	I0630 15:42:13.640342 1605445 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:42:13.640400 1605445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:42:13.662070 1605445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I0630 15:42:13.662755 1605445 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:42:13.663326 1605445 main.go:141] libmachine: Using API Version  1
	I0630 15:42:13.663350 1605445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:42:13.663718 1605445 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:42:13.663905 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:42:13.664115 1605445 main.go:141] libmachine: (pause-011818) Calling .GetState
	I0630 15:42:13.666904 1605445 fix.go:112] recreateIfNeeded on pause-011818: state=Running err=<nil>
	W0630 15:42:13.666942 1605445 fix.go:138] unexpected machine state, will restart: <nil>
	I0630 15:42:13.668816 1605445 out.go:177] * Updating the running kvm2 "pause-011818" VM ...
	I0630 15:42:13.670498 1605445 machine.go:93] provisionDockerMachine start ...
	I0630 15:42:13.670569 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:42:13.671018 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:13.675586 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:13.676317 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:13.676428 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:13.676901 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:13.677130 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:13.677320 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:13.677605 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:13.677796 1605445 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:13.678124 1605445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0630 15:42:13.678140 1605445 main.go:141] libmachine: About to run SSH command:
	hostname
	I0630 15:42:13.798849 1605445 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-011818
	
	I0630 15:42:13.798890 1605445 main.go:141] libmachine: (pause-011818) Calling .GetMachineName
	I0630 15:42:13.799156 1605445 buildroot.go:166] provisioning hostname "pause-011818"
	I0630 15:42:13.799235 1605445 main.go:141] libmachine: (pause-011818) Calling .GetMachineName
	I0630 15:42:13.799435 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:13.802902 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:13.803323 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:13.803373 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:13.803715 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:13.803964 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:13.804105 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:13.804306 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:13.804523 1605445 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:13.804750 1605445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0630 15:42:13.804767 1605445 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-011818 && echo "pause-011818" | sudo tee /etc/hostname
	I0630 15:42:13.944810 1605445 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-011818
	
	I0630 15:42:13.944841 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:13.948543 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:13.949056 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:13.949089 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:13.949382 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:13.949683 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:13.949935 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:13.950088 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:13.950348 1605445 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:13.950662 1605445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0630 15:42:13.950688 1605445 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-011818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-011818/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-011818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:42:14.072049 1605445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:42:14.072091 1605445 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:42:14.072158 1605445 buildroot.go:174] setting up certificates
	I0630 15:42:14.072174 1605445 provision.go:84] configureAuth start
	I0630 15:42:14.072191 1605445 main.go:141] libmachine: (pause-011818) Calling .GetMachineName
	I0630 15:42:14.073223 1605445 main.go:141] libmachine: (pause-011818) Calling .GetIP
	I0630 15:42:14.076635 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:14.077020 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:14.077067 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:14.077271 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:14.080571 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:14.081026 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:14.081049 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:14.081309 1605445 provision.go:143] copyHostCerts
	I0630 15:42:14.081382 1605445 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:42:14.081419 1605445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:42:14.081496 1605445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:42:14.081651 1605445 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:42:14.081667 1605445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:42:14.081703 1605445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:42:14.081824 1605445 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:42:14.081836 1605445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:42:14.081859 1605445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:42:14.081940 1605445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.pause-011818 san=[127.0.0.1 192.168.61.93 localhost minikube pause-011818]
	I0630 15:42:14.572927 1605445 provision.go:177] copyRemoteCerts
	I0630 15:42:14.572997 1605445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:42:14.573024 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:14.578692 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:14.670915 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:14.670958 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:14.673798 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:14.674063 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:14.674278 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:14.674449 1605445 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/pause-011818/id_rsa Username:docker}
	I0630 15:42:14.766451 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:42:14.799272 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0630 15:42:14.832319 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0630 15:42:14.871633 1605445 provision.go:87] duration metric: took 799.439728ms to configureAuth
	I0630 15:42:14.871675 1605445 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:42:14.871965 1605445 config.go:182] Loaded profile config "pause-011818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:14.872063 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:15.233742 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:15.234230 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:15.234273 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:15.234394 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:15.234667 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:15.234834 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:15.234963 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:15.235154 1605445 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:15.235452 1605445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0630 15:42:15.235474 1605445 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:42:22.592129 1605445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:42:22.592162 1605445 machine.go:96] duration metric: took 8.921621129s to provisionDockerMachine
	I0630 15:42:22.592177 1605445 start.go:293] postStartSetup for "pause-011818" (driver="kvm2")
	I0630 15:42:22.592192 1605445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:42:22.592215 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:42:22.592658 1605445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:42:22.592693 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:22.595869 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.596586 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:22.596614 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.596848 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:22.597052 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:22.597218 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:22.597370 1605445 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/pause-011818/id_rsa Username:docker}
	I0630 15:42:22.685446 1605445 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:42:22.690455 1605445 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:42:22.690489 1605445 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:42:22.690570 1605445 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:42:22.690652 1605445 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:42:22.690739 1605445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:42:22.702216 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:42:22.734131 1605445 start.go:296] duration metric: took 141.936333ms for postStartSetup
	I0630 15:42:22.734180 1605445 fix.go:56] duration metric: took 9.094317304s for fixHost
	I0630 15:42:22.734203 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:22.737883 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.738289 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:22.738326 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.738484 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:22.738696 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:22.738893 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:22.739058 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:22.739269 1605445 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:22.739528 1605445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.93 22 <nil> <nil>}
	I0630 15:42:22.739548 1605445 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:42:22.855220 1605445 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298142.851507154
	
	I0630 15:42:22.855261 1605445 fix.go:216] guest clock: 1751298142.851507154
	I0630 15:42:22.855272 1605445 fix.go:229] Guest: 2025-06-30 15:42:22.851507154 +0000 UTC Remote: 2025-06-30 15:42:22.734184575 +0000 UTC m=+49.007197189 (delta=117.322579ms)
	I0630 15:42:22.855302 1605445 fix.go:200] guest clock delta is within tolerance: 117.322579ms
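	fix.go above reads the guest clock over SSH with "date +%s.%N" and accepts the host when the guest/host skew stays inside a tolerance (117.322579ms here). A rough sketch of that comparison, assuming a one-second stand-in tolerance rather than minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output such as
// "1751298142.851507154" (the value in the log) into a time.Time.
// %N always prints nine digits, so the fractional part is nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1751298142.851507154")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	// One second is a stand-in tolerance, not minikube's actual threshold.
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= time.Second)
}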
	I0630 15:42:22.855309 1605445 start.go:83] releasing machines lock for "pause-011818", held for 9.21547852s
	I0630 15:42:22.855342 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:42:22.855646 1605445 main.go:141] libmachine: (pause-011818) Calling .GetIP
	I0630 15:42:22.859254 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.859825 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:22.859871 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.860050 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:42:22.860743 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:42:22.860978 1605445 main.go:141] libmachine: (pause-011818) Calling .DriverName
	I0630 15:42:22.861099 1605445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:42:22.861153 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:22.861235 1605445 ssh_runner.go:195] Run: cat /version.json
	I0630 15:42:22.861262 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHHostname
	I0630 15:42:22.864368 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.864516 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.864823 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:22.864842 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.864877 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:22.864892 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:22.865099 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:22.865208 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHPort
	I0630 15:42:22.865277 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:22.865468 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:22.865460 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHKeyPath
	I0630 15:42:22.865684 1605445 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/pause-011818/id_rsa Username:docker}
	I0630 15:42:22.865846 1605445 main.go:141] libmachine: (pause-011818) Calling .GetSSHUsername
	I0630 15:42:22.866039 1605445 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/pause-011818/id_rsa Username:docker}
	I0630 15:42:22.955554 1605445 ssh_runner.go:195] Run: systemctl --version
	I0630 15:42:22.995540 1605445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:42:23.158188 1605445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:42:23.166132 1605445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:42:23.166211 1605445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:42:23.182905 1605445 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0630 15:42:23.182944 1605445 start.go:495] detecting cgroup driver to use...
	I0630 15:42:23.183021 1605445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:42:23.211664 1605445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:42:23.237581 1605445 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:42:23.237662 1605445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:42:23.255168 1605445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:42:23.270605 1605445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:42:23.455377 1605445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:42:23.632741 1605445 docker.go:246] disabling docker service ...
	I0630 15:42:23.632825 1605445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:42:23.663675 1605445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:42:23.683557 1605445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:42:24.082554 1605445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:42:24.484061 1605445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:42:24.504320 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:42:24.539391 1605445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:42:24.539529 1605445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:24.555482 1605445 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:42:24.555616 1605445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:24.592073 1605445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:24.606065 1605445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:24.638294 1605445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:42:24.683663 1605445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:24.726064 1605445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:24.769266 1605445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:24.812969 1605445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:42:24.837150 1605445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 15:42:24.864423 1605445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:42:25.307789 1605445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:42:25.993304 1605445 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:42:25.993513 1605445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:42:26.000493 1605445 start.go:563] Will wait 60s for crictl version
	I0630 15:42:26.000588 1605445 ssh_runner.go:195] Run: which crictl
	I0630 15:42:26.005037 1605445 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:42:26.058630 1605445 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:42:26.058719 1605445 ssh_runner.go:195] Run: crio --version
	I0630 15:42:26.132329 1605445 ssh_runner.go:195] Run: crio --version
	I0630 15:42:26.211548 1605445 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:42:26.213547 1605445 main.go:141] libmachine: (pause-011818) Calling .GetIP
	I0630 15:42:26.217469 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:26.217865 1605445 main.go:141] libmachine: (pause-011818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:4f:cd", ip: ""} in network mk-pause-011818: {Iface:virbr3 ExpiryTime:2025-06-30 16:40:48 +0000 UTC Type:0 Mac:52:54:00:87:4f:cd Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:pause-011818 Clientid:01:52:54:00:87:4f:cd}
	I0630 15:42:26.217921 1605445 main.go:141] libmachine: (pause-011818) DBG | domain pause-011818 has defined IP address 192.168.61.93 and MAC address 52:54:00:87:4f:cd in network mk-pause-011818
	I0630 15:42:26.218350 1605445 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0630 15:42:26.227443 1605445 kubeadm.go:875] updating cluster {Name:pause-011818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:pause-011818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:42:26.227617 1605445 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:42:26.227701 1605445 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:42:26.372491 1605445 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:42:26.372516 1605445 crio.go:433] Images already preloaded, skipping extraction
	I0630 15:42:26.372590 1605445 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:42:26.486158 1605445 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:42:26.486188 1605445 cache_images.go:84] Images are preloaded, skipping loading
	I0630 15:42:26.486203 1605445 kubeadm.go:926] updating node { 192.168.61.93 8443 v1.33.2 crio true true} ...
	I0630 15:42:26.486347 1605445 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-011818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:pause-011818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 15:42:26.486438 1605445 ssh_runner.go:195] Run: crio config
	I0630 15:42:26.666358 1605445 cni.go:84] Creating CNI manager for ""
	I0630 15:42:26.666388 1605445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:42:26.666401 1605445 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:42:26.666432 1605445 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.93 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-011818 NodeName:pause-011818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:42:26.666608 1605445 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-011818"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.93"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.93"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
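	The four config documents printed above are written to /var/tmp/minikube/kubeadm.yaml.new by the scp step below. A speculative consistency-check sketch, assuming the sigs.k8s.io/yaml package and a naive split on the "---" document separator; it only verifies that the KubeletConfiguration's cgroupDriver matches the "cgroupfs" manager written into /etc/crio/crio.conf.d/02-crio.conf earlier in this log:

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp step below
	if err != nil {
		panic(err)
	}
	// Naive multi-document split; adequate for this generated file, not general YAML.
	for _, doc := range strings.Split(string(data), "\n---") {
		var cfg struct {
			Kind         string `json:"kind"`
			CgroupDriver string `json:"cgroupDriver"`
		}
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			continue // skip documents that don't fit this minimal shape
		}
		if cfg.Kind == "KubeletConfiguration" && cfg.CgroupDriver != "cgroupfs" {
			fmt.Printf("kubelet cgroupDriver %q disagrees with the cgroupfs manager set in 02-crio.conf\n", cfg.CgroupDriver)
		}
	}
}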
	I0630 15:42:26.666688 1605445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:42:26.688109 1605445 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:42:26.688204 1605445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:42:26.705477 1605445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0630 15:42:26.752061 1605445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:42:26.780341 1605445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0630 15:42:26.807830 1605445 ssh_runner.go:195] Run: grep 192.168.61.93	control-plane.minikube.internal$ /etc/hosts
	I0630 15:42:26.815963 1605445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:42:27.050756 1605445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:42:27.075544 1605445 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818 for IP: 192.168.61.93
	I0630 15:42:27.075579 1605445 certs.go:194] generating shared ca certs ...
	I0630 15:42:27.075604 1605445 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:27.075814 1605445 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:42:27.075871 1605445 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:42:27.075887 1605445 certs.go:256] generating profile certs ...
	I0630 15:42:27.076007 1605445 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/client.key
	I0630 15:42:27.076140 1605445 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/apiserver.key.11909051
	I0630 15:42:27.076206 1605445 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/proxy-client.key
	I0630 15:42:27.076386 1605445 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:42:27.076432 1605445 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:42:27.076450 1605445 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:42:27.076486 1605445 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:42:27.076520 1605445 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:42:27.076559 1605445 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:42:27.076618 1605445 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:42:27.077607 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:42:27.129743 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:42:27.192029 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:42:27.249165 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:42:27.302165 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 15:42:27.336323 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0630 15:42:27.374787 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:42:27.410385 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 15:42:27.457534 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:42:27.496015 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:42:27.527864 1605445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:42:27.572085 1605445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:42:27.600924 1605445 ssh_runner.go:195] Run: openssl version
	I0630 15:42:27.607286 1605445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:42:27.622976 1605445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:42:27.628299 1605445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:42:27.628392 1605445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:42:27.636870 1605445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:42:27.649196 1605445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:42:27.666869 1605445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:42:27.676030 1605445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:42:27.676135 1605445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:42:27.686913 1605445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:42:27.699984 1605445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:42:27.713257 1605445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:42:27.719909 1605445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:42:27.720005 1605445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:42:27.730251 1605445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
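Each ls / hash / ln triple above follows one pattern: openssl x509 -hash prints the 8-hex-digit subject hash that names the /etc/ssl/certs symlink (51391683.0, 3ec20f2e.0, b5213941.0 here). A minimal sketch of one cycle, using the same paths as the log:

	# Derive the subject-hash link name and point it at the CA, as the
	# runner does above for minikubeCA.pem (hash b5213941).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"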
	I0630 15:42:27.746351 1605445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:42:27.753126 1605445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0630 15:42:27.762998 1605445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0630 15:42:27.770451 1605445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0630 15:42:27.778728 1605445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0630 15:42:27.787360 1605445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0630 15:42:27.795222 1605445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
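The six openssl runs above all pass -checkend 86400: the command exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, and a non-zero exit is what triggers regeneration. A sketch of the same sweep, with the cert names taken from the log:

	# Exit status of -checkend drives the regenerate-or-skip decision.
	for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	    || echo "${crt}.crt expires within 24h"
	done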
	I0630 15:42:27.803921 1605445 kubeadm.go:392] StartCluster: {Name:pause-011818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:pause-011818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:42:27.804089 1605445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:42:27.804189 1605445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:42:27.845751 1605445 cri.go:89] found id: "5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b"
	I0630 15:42:27.845785 1605445 cri.go:89] found id: "7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d"
	I0630 15:42:27.845790 1605445 cri.go:89] found id: "308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7"
	I0630 15:42:27.845797 1605445 cri.go:89] found id: "94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5"
	I0630 15:42:27.845801 1605445 cri.go:89] found id: "3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4"
	I0630 15:42:27.845805 1605445 cri.go:89] found id: "77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba"
	I0630 15:42:27.845808 1605445 cri.go:89] found id: "194e6ed98f266425217bba7b5865190a0df019b532fe72bedbecea0ea6f2b9a0"
	I0630 15:42:27.845811 1605445 cri.go:89] found id: ""
	I0630 15:42:27.845872 1605445 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
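The excerpt ends just after the runner enumerated the kube-system container IDs; the matching stop shows up in the post-mortem below as a 9.39s crictl stop at 15:42:37. A sketch of that list-then-stop step, using the label filter from the log:

	# List kube-system containers via CRI, then stop them with the same
	# 10s timeout the runner uses.
	ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	[ -n "$ids" ] && sudo crictl stop --timeout=10 $ids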
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-011818 -n pause-011818
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-011818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-011818 logs -n 25: (1.643527544s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo find                           | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo crio                           | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-668101                                     | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:40 UTC |
	| start   | -p cert-expiration-775975                            | cert-expiration-775975    | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:41 UTC |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-185417                          | force-systemd-env-185417  | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:40 UTC |
	| start   | -p force-systemd-flag-632862                         | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:42 UTC |
	|         | --memory=3072 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:41 UTC |
	| start   | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:42 UTC |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-011818                                      | pause-011818              | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:42 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-632862 ssh cat                    | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:42 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-632862                         | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:42 UTC |
	| start   | -p cert-options-329017                               | cert-options-329017       | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:42:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:42:35.853113 1606244 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:42:35.853436 1606244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:42:35.853448 1606244 out.go:358] Setting ErrFile to fd 2...
	I0630 15:42:35.853456 1606244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:42:35.853724 1606244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:42:35.854405 1606244 out.go:352] Setting JSON to false
	I0630 15:42:35.855594 1606244 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33848,"bootTime":1751264308,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:42:35.855670 1606244 start.go:140] virtualization: kvm guest
	I0630 15:42:35.858215 1606244 out.go:177] * [kubernetes-upgrade-691468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:42:35.860209 1606244 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:42:35.860228 1606244 notify.go:220] Checking for updates...
	I0630 15:42:35.865119 1606244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:42:35.866542 1606244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:42:35.868138 1606244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:42:35.869605 1606244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:42:35.871126 1606244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:42:35.873306 1606244 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:35.873808 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:42:35.873880 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:42:35.891821 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37003
	I0630 15:42:35.892431 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:42:35.893175 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:42:35.893221 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:42:35.893689 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:42:35.893943 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:35.894255 1606244 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:42:35.894983 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:42:35.895077 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:42:35.915714 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0630 15:42:35.916489 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:42:35.917168 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:42:35.917198 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:42:35.917599 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:42:35.917861 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:35.967770 1606244 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 15:42:35.969920 1606244 start.go:304] selected driver: kvm2
	I0630 15:42:35.969952 1606244 start.go:908] validating driver "kvm2" against &{Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:42:35.970102 1606244 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:42:35.970886 1606244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:42:35.970991 1606244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:42:35.990124 1606244 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:42:35.990586 1606244 cni.go:84] Creating CNI manager for ""
	I0630 15:42:35.990645 1606244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:42:35.990696 1606244 start.go:347] cluster config:
	{Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:42:35.990882 1606244 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:42:35.993369 1606244 out.go:177] * Starting "kubernetes-upgrade-691468" primary control-plane node in "kubernetes-upgrade-691468" cluster
	I0630 15:42:37.395279 1605445 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b 7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d 308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7 94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5 3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4 77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba 194e6ed98f266425217bba7b5865190a0df019b532fe72bedbecea0ea6f2b9a0: (9.391509525s)
	I0630 15:42:37.395375 1605445 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0630 15:42:37.457294 1605445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:42:37.473364 1605445 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jun 30 15:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun 30 15:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1969 Jun 30 15:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 30 15:41 /etc/kubernetes/scheduler.conf
	
	I0630 15:42:37.473492 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:42:37.488502 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:42:37.502707 1605445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:42:37.502782 1605445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:42:37.517688 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:42:37.531124 1605445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:42:37.531212 1605445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:42:37.547909 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:42:37.560330 1605445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:42:37.560417 1605445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:42:37.573294 1605445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:42:37.584981 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:37.643100 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:36.008014 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:36.008818 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find current IP address of domain cert-options-329017 in network mk-cert-options-329017
	I0630 15:42:36.008838 1605949 main.go:141] libmachine: (cert-options-329017) DBG | I0630 15:42:36.008780 1606047 retry.go:31] will retry after 3.439221971s: waiting for domain to come up
	I0630 15:42:39.449354 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:39.450005 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find current IP address of domain cert-options-329017 in network mk-cert-options-329017
	I0630 15:42:39.450090 1605949 main.go:141] libmachine: (cert-options-329017) DBG | I0630 15:42:39.450001 1606047 retry.go:31] will retry after 3.302475314s: waiting for domain to come up
	I0630 15:42:35.995150 1606244 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:42:35.995350 1606244 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 15:42:35.995374 1606244 cache.go:56] Caching tarball of preloaded images
	I0630 15:42:35.995494 1606244 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:42:35.995512 1606244 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 15:42:35.995657 1606244 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/config.json ...
	I0630 15:42:35.995915 1606244 start.go:360] acquireMachinesLock for kubernetes-upgrade-691468: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:42:38.853268 1605445 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210112749s)
	I0630 15:42:38.853316 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:39.132560 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:39.204851 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
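Rather than a fresh kubeadm init, the restart path replays individual init phases against the same rendered config. The sequence above, condensed into one runnable sketch (paths exactly as in the log):

	KPATH=/var/lib/minikube/binaries/v1.33.2
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$KPATH:$PATH" kubeadm init phase certs all         --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase control-plane all --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase etcd local        --config "$CFG"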
	I0630 15:42:39.295067 1605445 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:42:39.295187 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:39.796163 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:40.295601 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:40.316394 1605445 api_server.go:72] duration metric: took 1.021328363s to wait for apiserver process to appear ...
	I0630 15:42:40.316425 1605445 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:42:40.316444 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.197448 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:42:43.197484 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:42:43.197505 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.242367 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:42:43.242397 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:42:43.316608 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.322258 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:42:43.322290 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:42:43.817512 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.822747 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:42:43.822783 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:42:44.317444 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:44.324363 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0630 15:42:44.333892 1605445 api_server.go:141] control plane version: v1.33.2
	I0630 15:42:44.333936 1605445 api_server.go:131] duration metric: took 4.017501906s to wait for apiserver health ...
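The probe progression above is the expected restart sequence: 403 while anonymous access to /healthz is still forbidden (the RBAC bootstrap roles that grant it do not exist yet), 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststart hooks are pending, then 200. The same verbose per-check list can be fetched by hand:

	# Manual equivalent of the runner's probe; ?verbose returns the
	# [+]/[-] check list shown above, -k skips TLS verification.
	curl -k "https://192.168.61.93:8443/healthz?verbose"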
	I0630 15:42:44.333949 1605445 cni.go:84] Creating CNI manager for ""
	I0630 15:42:44.333959 1605445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:42:44.336020 1605445 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 15:42:44.337442 1605445 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:42:44.351732 1605445 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
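The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config; its exact bytes are not shown in the log. A representative bridge conflist for the pod subnet in use (10.244.0.0/16) might look like the sketch below; every field value other than the subnet is an assumption, and minikube's real template may differ:

	# Assumed shape only; not the literal 496 bytes from the log.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF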
	I0630 15:42:44.376840 1605445 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:42:44.380586 1605445 system_pods.go:59] 6 kube-system pods found
	I0630 15:42:44.380636 1605445 system_pods.go:61] "coredns-674b8bbfcf-m5x9v" [fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:42:44.380649 1605445 system_pods.go:61] "etcd-pause-011818" [4bcbec68-c5f1-4075-aea0-9886466aac76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:42:44.380663 1605445 system_pods.go:61] "kube-apiserver-pause-011818" [f4aac6bd-3a23-4c64-8ce4-09e508687d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:42:44.380678 1605445 system_pods.go:61] "kube-controller-manager-pause-011818" [18b5d2ba-f0f1-4f82-9e0c-7df77d432a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:42:44.380692 1605445 system_pods.go:61] "kube-proxy-mgmjs" [8b8ac108-b30d-4905-8502-8bfde43240da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0630 15:42:44.380704 1605445 system_pods.go:61] "kube-scheduler-pause-011818" [d89944c8-73f1-42c8-bc87-b8bd6dfbe11b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:42:44.380718 1605445 system_pods.go:74] duration metric: took 3.845351ms to wait for pod list to return data ...
	I0630 15:42:44.380754 1605445 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:42:44.383534 1605445 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:42:44.383569 1605445 node_conditions.go:123] node cpu capacity is 2
	I0630 15:42:44.383587 1605445 node_conditions.go:105] duration metric: took 2.823647ms to run NodePressure ...
	I0630 15:42:44.383611 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:44.689248 1605445 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0630 15:42:44.694705 1605445 kubeadm.go:735] kubelet initialised
	I0630 15:42:44.694731 1605445 kubeadm.go:736] duration metric: took 5.449976ms waiting for restarted kubelet to initialise ...
	I0630 15:42:44.694749 1605445 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:42:44.710526 1605445 ops.go:34] apiserver oom_adj: -16
	I0630 15:42:44.710565 1605445 kubeadm.go:593] duration metric: took 16.796281707s to restartPrimaryControlPlane
	I0630 15:42:44.710579 1605445 kubeadm.go:394] duration metric: took 16.906670103s to StartCluster
	I0630 15:42:44.710607 1605445 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:44.710722 1605445 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:42:44.712076 1605445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:44.712469 1605445 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:42:44.712581 1605445 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:42:44.712733 1605445 config.go:182] Loaded profile config "pause-011818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:44.714734 1605445 out.go:177] * Verifying Kubernetes components...
	I0630 15:42:44.714734 1605445 out.go:177] * Enabled addons: 
	I0630 15:42:42.755130 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:42.755544 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find current IP address of domain cert-options-329017 in network mk-cert-options-329017
	I0630 15:42:42.755588 1605949 main.go:141] libmachine: (cert-options-329017) DBG | I0630 15:42:42.755503 1606047 retry.go:31] will retry after 4.405786509s: waiting for domain to come up
	I0630 15:42:44.715932 1605445 addons.go:514] duration metric: took 3.366717ms for enable addons: enabled=[]
	I0630 15:42:44.715957 1605445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:42:44.928469 1605445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:42:44.953138 1605445 node_ready.go:35] waiting up to 6m0s for node "pause-011818" to be "Ready" ...
	I0630 15:42:44.957122 1605445 node_ready.go:49] node "pause-011818" is "Ready"
	I0630 15:42:44.957161 1605445 node_ready.go:38] duration metric: took 3.965984ms for node "pause-011818" to be "Ready" ...
	I0630 15:42:44.957176 1605445 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:42:44.957236 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:44.975894 1605445 api_server.go:72] duration metric: took 263.375426ms to wait for apiserver process to appear ...
	I0630 15:42:44.975922 1605445 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:42:44.975950 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:44.981961 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0630 15:42:44.983347 1605445 api_server.go:141] control plane version: v1.33.2
	I0630 15:42:44.983392 1605445 api_server.go:131] duration metric: took 7.451657ms to wait for apiserver health ...
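
The healthz wait above boils down to polling the API server's /healthz endpoint until it returns HTTP 200. A minimal sketch of that pattern, assuming the URL from the log and a skip-verify TLS client (the real code verifies against the cluster CA from the kubeconfig):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Sketch only: skip cert verification; minikube trusts the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.93:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
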
	I0630 15:42:44.983406 1605445 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:42:44.989072 1605445 system_pods.go:59] 6 kube-system pods found
	I0630 15:42:44.989103 1605445 system_pods.go:61] "coredns-674b8bbfcf-m5x9v" [fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:42:44.989111 1605445 system_pods.go:61] "etcd-pause-011818" [4bcbec68-c5f1-4075-aea0-9886466aac76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:42:44.989118 1605445 system_pods.go:61] "kube-apiserver-pause-011818" [f4aac6bd-3a23-4c64-8ce4-09e508687d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:42:44.989125 1605445 system_pods.go:61] "kube-controller-manager-pause-011818" [18b5d2ba-f0f1-4f82-9e0c-7df77d432a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:42:44.989130 1605445 system_pods.go:61] "kube-proxy-mgmjs" [8b8ac108-b30d-4905-8502-8bfde43240da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0630 15:42:44.989136 1605445 system_pods.go:61] "kube-scheduler-pause-011818" [d89944c8-73f1-42c8-bc87-b8bd6dfbe11b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:42:44.989152 1605445 system_pods.go:74] duration metric: took 5.738845ms to wait for pod list to return data ...
	I0630 15:42:44.989164 1605445 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:42:44.991965 1605445 default_sa.go:45] found service account: "default"
	I0630 15:42:44.991998 1605445 default_sa.go:55] duration metric: took 2.825661ms for default service account to be created ...
	I0630 15:42:44.992013 1605445 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:42:44.995112 1605445 system_pods.go:86] 6 kube-system pods found
	I0630 15:42:44.995147 1605445 system_pods.go:89] "coredns-674b8bbfcf-m5x9v" [fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:42:44.995156 1605445 system_pods.go:89] "etcd-pause-011818" [4bcbec68-c5f1-4075-aea0-9886466aac76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:42:44.995164 1605445 system_pods.go:89] "kube-apiserver-pause-011818" [f4aac6bd-3a23-4c64-8ce4-09e508687d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:42:44.995170 1605445 system_pods.go:89] "kube-controller-manager-pause-011818" [18b5d2ba-f0f1-4f82-9e0c-7df77d432a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:42:44.995175 1605445 system_pods.go:89] "kube-proxy-mgmjs" [8b8ac108-b30d-4905-8502-8bfde43240da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0630 15:42:44.995182 1605445 system_pods.go:89] "kube-scheduler-pause-011818" [d89944c8-73f1-42c8-bc87-b8bd6dfbe11b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:42:44.995191 1605445 system_pods.go:126] duration metric: took 3.170036ms to wait for k8s-apps to be running ...
	I0630 15:42:44.995198 1605445 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 15:42:44.995247 1605445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:42:45.013136 1605445 system_svc.go:56] duration metric: took 17.92073ms WaitForService to wait for kubelet
	I0630 15:42:45.013180 1605445 kubeadm.go:578] duration metric: took 300.667792ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:42:45.013210 1605445 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:42:45.016262 1605445 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:42:45.016298 1605445 node_conditions.go:123] node cpu capacity is 2
	I0630 15:42:45.016314 1605445 node_conditions.go:105] duration metric: took 3.097787ms to run NodePressure ...
	I0630 15:42:45.016333 1605445 start.go:241] waiting for startup goroutines ...
	I0630 15:42:45.016344 1605445 start.go:246] waiting for cluster config update ...
	I0630 15:42:45.016356 1605445 start.go:255] writing updated cluster config ...
	I0630 15:42:45.016738 1605445 ssh_runner.go:195] Run: rm -f paused
	I0630 15:42:45.022095 1605445 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:42:45.022815 1605445 kapi.go:59] client config for pause-011818: &rest.Config{Host:"https://192.168.61.93:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0630 15:42:45.025968 1605445 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-m5x9v" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:42:47.034205 1605445 pod_ready.go:104] pod "coredns-674b8bbfcf-m5x9v" is not "Ready", error: <nil>
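
The client config dump above feeds the "Ready or be gone" pod wait that follows it. A hedged client-go sketch of the same pattern, assuming the kubeconfig path and pod name from the log (clientcmd.BuildConfigFromFlags, kubernetes.NewForConfig, and wait.PollUntilContextTimeout are real client-go/apimachinery APIs):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a clientset from the kubeconfig the run just updated.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20991-1550299/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll until the pod is Ready or has been deleted, mirroring
    	// the 4m "Ready or be gone" wait in the log.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-674b8bbfcf-m5x9v", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return true, nil // pod is gone; the wait is satisfied
    			}
    			if err != nil {
    				return false, nil // transient error; keep polling
    			}
    			for _, c := range p.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("wait result:", err)
    }
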
	I0630 15:42:49.019206 1606244 start.go:364] duration metric: took 13.023203423s to acquireMachinesLock for "kubernetes-upgrade-691468"
	I0630 15:42:49.019281 1606244 start.go:96] Skipping create...Using existing machine configuration
	I0630 15:42:49.019290 1606244 fix.go:54] fixHost starting: 
	I0630 15:42:49.019723 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:42:49.019786 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:42:49.040153 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0630 15:42:49.040712 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:42:49.041240 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:42:49.041279 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:42:49.041753 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:42:49.042011 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:49.042225 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetState
	I0630 15:42:49.044289 1606244 fix.go:112] recreateIfNeeded on kubernetes-upgrade-691468: state=Running err=<nil>
	W0630 15:42:49.044315 1606244 fix.go:138] unexpected machine state, will restart: <nil>
	I0630 15:42:49.046520 1606244 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-691468" VM ...
	I0630 15:42:47.162760 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.163334 1605949 main.go:141] libmachine: (cert-options-329017) found domain IP: 192.168.39.244
	I0630 15:42:47.163368 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has current primary IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.163373 1605949 main.go:141] libmachine: (cert-options-329017) reserving static IP address...
	I0630 15:42:47.163896 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find host DHCP lease matching {name: "cert-options-329017", mac: "52:54:00:1c:2c:a0", ip: "192.168.39.244"} in network mk-cert-options-329017
	I0630 15:42:47.263288 1605949 main.go:141] libmachine: (cert-options-329017) reserved static IP address 192.168.39.244 for domain cert-options-329017
	I0630 15:42:47.263305 1605949 main.go:141] libmachine: (cert-options-329017) waiting for SSH...
	I0630 15:42:47.263328 1605949 main.go:141] libmachine: (cert-options-329017) DBG | Getting to WaitForSSH function...
	I0630 15:42:47.266844 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.267388 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.267415 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.267589 1605949 main.go:141] libmachine: (cert-options-329017) DBG | Using SSH client type: external
	I0630 15:42:47.267612 1605949 main.go:141] libmachine: (cert-options-329017) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa (-rw-------)
	I0630 15:42:47.267646 1605949 main.go:141] libmachine: (cert-options-329017) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:42:47.267673 1605949 main.go:141] libmachine: (cert-options-329017) DBG | About to run SSH command:
	I0630 15:42:47.267686 1605949 main.go:141] libmachine: (cert-options-329017) DBG | exit 0
	I0630 15:42:47.398023 1605949 main.go:141] libmachine: (cert-options-329017) DBG | SSH cmd err, output: <nil>: 
	I0630 15:42:47.398376 1605949 main.go:141] libmachine: (cert-options-329017) KVM machine creation complete
	I0630 15:42:47.398648 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetConfigRaw
	I0630 15:42:47.399305 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:47.399545 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:47.399739 1605949 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:42:47.399748 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetState
	I0630 15:42:47.400966 1605949 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:42:47.400989 1605949 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:42:47.401001 1605949 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:42:47.401006 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.403800 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.404289 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.404312 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.404487 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.404673 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.404823 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.404959 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.405104 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.405439 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.405447 1605949 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:42:47.516706 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
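
The WaitForSSH sequence above (an external ssh probe, then the native client running `exit 0`) is a retry loop around an SSH dial. A minimal sketch with golang.org/x/crypto/ssh, assuming the address, user, and key path from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // waitForSSH retries an SSH dial plus a no-op command until it succeeds.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User: user,
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Sketch only: host key checking is disabled, matching
    		// StrictHostKeyChecking=no in the logged ssh flags.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         10 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
    			sess, err := client.NewSession()
    			if err == nil {
    				runErr := sess.Run("exit 0") // same probe the log shows
    				sess.Close()
    				client.Close()
    				if runErr == nil {
    					return nil
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s not ready after %s", addr, timeout)
    }

    func main() {
    	err := waitForSSH("192.168.39.244:22", "docker",
    		"/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa",
    		2*time.Minute)
    	fmt.Println(err)
    }
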
	I0630 15:42:47.516722 1605949 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:42:47.516730 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.521243 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.521672 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.521717 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.521883 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.522121 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.522267 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.522428 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.522605 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.522811 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.522818 1605949 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:42:47.638938 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:42:47.639025 1605949 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:42:47.639033 1605949 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:42:47.639044 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetMachineName
	I0630 15:42:47.639421 1605949 buildroot.go:166] provisioning hostname "cert-options-329017"
	I0630 15:42:47.639446 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetMachineName
	I0630 15:42:47.639687 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.642647 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.643159 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.643182 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.643414 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.643604 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.643765 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.643855 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.644075 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.644295 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.644301 1605949 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-329017 && echo "cert-options-329017" | sudo tee /etc/hostname
	I0630 15:42:47.775171 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-329017
	
	I0630 15:42:47.775192 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.778607 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.779160 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.779185 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.779438 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.779635 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.779821 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.780037 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.780265 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.780574 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.780596 1605949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-329017' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-329017/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-329017' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:42:47.908361 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:42:47.908398 1605949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:42:47.908425 1605949 buildroot.go:174] setting up certificates
	I0630 15:42:47.908458 1605949 provision.go:84] configureAuth start
	I0630 15:42:47.908471 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetMachineName
	I0630 15:42:47.908765 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:47.911812 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.912197 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.912221 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.912362 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.914963 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.915320 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.915337 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.915482 1605949 provision.go:143] copyHostCerts
	I0630 15:42:47.915552 1605949 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:42:47.915568 1605949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:42:47.915635 1605949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:42:47.915730 1605949 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:42:47.915733 1605949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:42:47.915755 1605949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:42:47.915807 1605949 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:42:47.915811 1605949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:42:47.915830 1605949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:42:47.915870 1605949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.cert-options-329017 san=[127.0.0.1 192.168.39.244 cert-options-329017 localhost minikube]
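
The server-cert generation above issues a certificate whose SANs cover the san=[...] list in the log entry. A hedged standard-library sketch (self-signed here for brevity; the provisioner signs with the ca.pem/ca-key.pem pair instead):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-329017"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log.
    		DNSNames:    []string{"cert-options-329017", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.244")},
    	}
    	// Self-signed for the sketch; pass the CA cert and key as parent/signer
    	// to reproduce the CA-signed server.pem the log describes.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
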
	I0630 15:42:48.294961 1605949 provision.go:177] copyRemoteCerts
	I0630 15:42:48.295039 1605949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:42:48.295067 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.298138 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.298574 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.298587 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.298863 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.299055 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.299246 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.299357 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:48.391183 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:42:48.419565 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0630 15:42:48.448731 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:42:48.479872 1605949 provision.go:87] duration metric: took 571.39712ms to configureAuth
	I0630 15:42:48.479897 1605949 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:42:48.480223 1605949 config.go:182] Loaded profile config "cert-options-329017": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:48.480323 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.483905 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.484284 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.484309 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.484551 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.484778 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.484979 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.485156 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.485343 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:48.485587 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:48.485598 1605949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:42:48.739637 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:42:48.739655 1605949 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:42:48.739662 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetURL
	I0630 15:42:48.741182 1605949 main.go:141] libmachine: (cert-options-329017) DBG | using libvirt version 6000000
	I0630 15:42:48.743448 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.743803 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.743869 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.744009 1605949 main.go:141] libmachine: Docker is up and running!
	I0630 15:42:48.744017 1605949 main.go:141] libmachine: Reticulating splines...
	I0630 15:42:48.744026 1605949 client.go:171] duration metric: took 25.866047024s to LocalClient.Create
	I0630 15:42:48.744049 1605949 start.go:167] duration metric: took 25.866115532s to libmachine.API.Create "cert-options-329017"
	I0630 15:42:48.744064 1605949 start.go:293] postStartSetup for "cert-options-329017" (driver="kvm2")
	I0630 15:42:48.744113 1605949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:42:48.744145 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:48.744394 1605949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:42:48.744412 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.746522 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.746867 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.746897 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.747054 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.747255 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.747431 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.747562 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:48.842964 1605949 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:42:48.848298 1605949 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:42:48.848322 1605949 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:42:48.848457 1605949 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:42:48.848562 1605949 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:42:48.848668 1605949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:42:48.861474 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:42:48.890263 1605949 start.go:296] duration metric: took 146.182957ms for postStartSetup
	I0630 15:42:48.890308 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetConfigRaw
	I0630 15:42:48.890983 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:48.893512 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.893855 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.893876 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.894166 1605949 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/config.json ...
	I0630 15:42:48.894351 1605949 start.go:128] duration metric: took 26.038708542s to createHost
	I0630 15:42:48.894370 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.896942 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.897432 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.897456 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.897664 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.897876 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.898045 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.898136 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.898268 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:48.898468 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:48.898474 1605949 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:42:49.018952 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298168.968307324
	
	I0630 15:42:49.018970 1605949 fix.go:216] guest clock: 1751298168.968307324
	I0630 15:42:49.018981 1605949 fix.go:229] Guest: 2025-06-30 15:42:48.968307324 +0000 UTC Remote: 2025-06-30 15:42:48.894357387 +0000 UTC m=+33.446096434 (delta=73.949937ms)
	I0630 15:42:49.019012 1605949 fix.go:200] guest clock delta is within tolerance: 73.949937ms
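
The guest-clock check above runs `date +%s.%N` on the VM and compares the result against the host clock, accepting a small delta. A sketch of that comparison, assuming the raw string has already been read back over SSH:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1751298168.968307324" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad/truncate the fraction to 9 digits so it reads as nanoseconds.
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1751298168.968307324") // value from the log
    	if err != nil {
    		panic(err)
    	}
    	// A small delta (like the 73.949937ms above) is within tolerance;
    	// a large one would trigger a guest clock resync.
    	fmt.Printf("guest clock delta: %v\n", time.Since(guest))
    }
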
	I0630 15:42:49.019018 1605949 start.go:83] releasing machines lock for "cert-options-329017", held for 26.163557784s
	I0630 15:42:49.019087 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.019476 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:49.023471 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.023868 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:49.023893 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.024089 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.024744 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.024967 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.025075 1605949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:42:49.025136 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:49.025194 1605949 ssh_runner.go:195] Run: cat /version.json
	I0630 15:42:49.025214 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:49.029137 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.029468 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.029569 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:49.029594 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.029814 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:49.029915 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:49.029937 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.030117 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:49.030248 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:49.030363 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:49.030576 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:49.030662 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:49.030705 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:49.030856 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:49.156364 1605949 ssh_runner.go:195] Run: systemctl --version
	I0630 15:42:49.164909 1605949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:42:49.347007 1605949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:42:49.355516 1605949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:42:49.355584 1605949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:42:49.381697 1605949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:42:49.381714 1605949 start.go:495] detecting cgroup driver to use...
	I0630 15:42:49.381798 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:42:49.406025 1605949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:42:49.423108 1605949 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:42:49.423193 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:42:49.441149 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:42:49.461486 1605949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:42:49.606446 1605949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:42:49.753749 1605949 docker.go:246] disabling docker service ...
	I0630 15:42:49.753817 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:42:49.772689 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:42:49.787603 1605949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:42:49.994593 1605949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:42:50.158582 1605949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:42:50.179726 1605949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:42:50.208926 1605949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:42:50.208978 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.225914 1605949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:42:50.225997 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.242063 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.260117 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.274149 1605949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:42:50.286660 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.300475 1605949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.321986 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
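
The crictl.yaml write a few lines up leaves a one-line file pointing crictl at the CRI-O socket:

    runtime-endpoint: unix:///var/run/crio/crio.sock

And taken together, the sed edits above leave the 02-crio.conf drop-in along these lines (the section headers are an assumption about where CRI-O keeps these keys; the log only shows the key rewrites):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
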
	I0630 15:42:50.335867 1605949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:42:50.347922 1605949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:42:50.347978 1605949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:42:50.361879 1605949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 15:42:50.373609 1605949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:42:50.523896 1605949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:42:50.650651 1605949 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:42:50.650724 1605949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:42:50.656156 1605949 start.go:563] Will wait 60s for crictl version
	I0630 15:42:50.656228 1605949 ssh_runner.go:195] Run: which crictl
	I0630 15:42:50.660921 1605949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:42:50.716328 1605949 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:42:50.716409 1605949 ssh_runner.go:195] Run: crio --version
	I0630 15:42:50.749644 1605949 ssh_runner.go:195] Run: crio --version
	I0630 15:42:50.779444 1605949 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:42:49.048090 1606244 machine.go:93] provisionDockerMachine start ...
	I0630 15:42:49.048134 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:49.048413 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.051984 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.052754 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.052795 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.053031 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.053285 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.053665 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.053856 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.054220 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:49.054568 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:49.054586 1606244 main.go:141] libmachine: About to run SSH command:
	hostname
	I0630 15:42:49.174505 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-691468
	
	I0630 15:42:49.174550 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:42:49.174835 1606244 buildroot.go:166] provisioning hostname "kubernetes-upgrade-691468"
	I0630 15:42:49.174865 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:42:49.175077 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.178509 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.179023 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.179061 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.179290 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.179540 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.179759 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.179975 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.180193 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:49.180509 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:49.180533 1606244 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-691468 && echo "kubernetes-upgrade-691468" | sudo tee /etc/hostname
	I0630 15:42:49.320759 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-691468
	
	I0630 15:42:49.320823 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.324112 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.324593 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.324627 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.324874 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.325092 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.325326 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.325533 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.325777 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:49.326003 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:49.326021 1606244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-691468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-691468/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-691468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:42:49.444213 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:42:49.444251 1606244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:42:49.444277 1606244 buildroot.go:174] setting up certificates
	I0630 15:42:49.444297 1606244 provision.go:84] configureAuth start
	I0630 15:42:49.444306 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:42:49.444622 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetIP
	I0630 15:42:49.448151 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.448606 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.448649 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.448981 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.451782 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.452224 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.452255 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.452437 1606244 provision.go:143] copyHostCerts
	I0630 15:42:49.452509 1606244 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:42:49.452534 1606244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:42:49.452598 1606244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:42:49.452710 1606244 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:42:49.452720 1606244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:42:49.452748 1606244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:42:49.452816 1606244 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:42:49.452825 1606244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:42:49.452845 1606244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:42:49.452923 1606244 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-691468 san=[127.0.0.1 192.168.50.75 kubernetes-upgrade-691468 localhost minikube]
	I0630 15:42:49.862253 1606244 provision.go:177] copyRemoteCerts
	I0630 15:42:49.862328 1606244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:42:49.862366 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.866009 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.866420 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.866454 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.866682 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.866882 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.867059 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.867179 1606244 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:42:49.960597 1606244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:42:49.992219 1606244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0630 15:42:50.024997 1606244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:42:50.055928 1606244 provision.go:87] duration metric: took 611.609586ms to configureAuth
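configureAuth regenerates the machine's server certificate so its SANs cover every name the endpoint may be reached by (127.0.0.1, 192.168.50.75, the machine hostname, localhost, minikube, per the san=[...] list above). A self-contained sketch of producing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas the provisioner signs with the ca.pem/ca-key.pem pair listed in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and template for a server certificate carrying the same SANs the
	// provisioner logs above (two IPs plus three hostname aliases).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-691468"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-691468", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.75")},
	}
	// Self-signed here for brevity; the real provisioner signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}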
	I0630 15:42:50.055967 1606244 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:42:50.056190 1606244 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:50.056286 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:50.059572 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:50.060189 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:50.060236 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:50.060528 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:50.060842 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:50.061089 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:50.061287 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:50.061546 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:50.061849 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:50.061884 1606244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W0630 15:42:49.532038 1605445 pod_ready.go:104] pod "coredns-674b8bbfcf-m5x9v" is not "Ready", error: <nil>
	I0630 15:42:50.533537 1605445 pod_ready.go:94] pod "coredns-674b8bbfcf-m5x9v" is "Ready"
	I0630 15:42:50.533575 1605445 pod_ready.go:86] duration metric: took 5.507569083s for pod "coredns-674b8bbfcf-m5x9v" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:50.536968 1605445 pod_ready.go:83] waiting for pod "etcd-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:42:52.542794 1605445 pod_ready.go:104] pod "etcd-pause-011818" is not "Ready", error: <nil>
	I0630 15:42:50.780829 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:50.784165 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:50.784603 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:50.784622 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:50.784860 1605949 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 15:42:50.789335 1605949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:42:50.804525 1605949 kubeadm.go:875] updating cluster {Name:cert-options-329017 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:cert
-options-329017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8555 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:42:50.804653 1605949 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:42:50.804697 1605949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:42:50.847869 1605949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:42:50.847950 1605949 ssh_runner.go:195] Run: which lz4
	I0630 15:42:50.852429 1605949 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:42:50.856847 1605949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:42:50.856883 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:42:52.353170 1605949 crio.go:462] duration metric: took 1.500783762s to copy over tarball
	I0630 15:42:52.353239 1605949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
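The preload path is probe-then-copy: a stat over SSH decides whether the cached tarball needs to be shipped (the status-1 exit above means it was missing), and the extraction preserves xattrs so file capabilities survive unpacking into /var. A rough sketch of the same sequence with os/exec, where the hypothetical ensurePreload helper elides the actual scp step:

package main

import (
	"fmt"
	"os/exec"
)

// ensurePreload (hypothetical helper) mirrors the sequence in the log:
// probe for /preloaded.tar.lz4, copy it over only when the stat fails, then
// unpack it into /var with lz4 the way the ssh_runner does.
func ensurePreload() error {
	if err := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run(); err != nil {
		// Exit status 1 means the file is missing; minikube scp's the cached
		// tarball here. The copy itself is elided in this sketch.
		fmt.Println("preload missing, would copy cached tarball")
	}
	return exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run()
}

func main() {
	if err := ensurePreload(); err != nil {
		fmt.Println("extract failed:", err)
	}
}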
	W0630 15:42:54.544231 1605445 pod_ready.go:104] pod "etcd-pause-011818" is not "Ready", error: <nil>
	I0630 15:42:56.671739 1605445 pod_ready.go:94] pod "etcd-pause-011818" is "Ready"
	I0630 15:42:56.671770 1605445 pod_ready.go:86] duration metric: took 6.134777413s for pod "etcd-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:56.688788 1605445 pod_ready.go:83] waiting for pod "kube-apiserver-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:56.695429 1605445 pod_ready.go:94] pod "kube-apiserver-pause-011818" is "Ready"
	I0630 15:42:56.695464 1605445 pod_ready.go:86] duration metric: took 6.640138ms for pod "kube-apiserver-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:56.697813 1605445 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:42:58.704645 1605445 pod_ready.go:104] pod "kube-controller-manager-pause-011818" is not "Ready", error: <nil>
	I0630 15:42:59.704068 1605445 pod_ready.go:94] pod "kube-controller-manager-pause-011818" is "Ready"
	I0630 15:42:59.704103 1605445 pod_ready.go:86] duration metric: took 3.006259809s for pod "kube-controller-manager-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.706681 1605445 pod_ready.go:83] waiting for pod "kube-proxy-mgmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.711902 1605445 pod_ready.go:94] pod "kube-proxy-mgmjs" is "Ready"
	I0630 15:42:59.711940 1605445 pod_ready.go:86] duration metric: took 5.231945ms for pod "kube-proxy-mgmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.714330 1605445 pod_ready.go:83] waiting for pod "kube-scheduler-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.741396 1605445 pod_ready.go:94] pod "kube-scheduler-pause-011818" is "Ready"
	I0630 15:42:59.741443 1605445 pod_ready.go:86] duration metric: took 27.083819ms for pod "kube-scheduler-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.741460 1605445 pod_ready.go:40] duration metric: took 14.719311167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:42:59.795433 1605445 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:42:59.797218 1605445 out.go:177] * Done! kubectl is now configured to use "pause-011818" cluster and "default" namespace by default
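The pod_ready.go loop that just finished is a plain readiness poll: fetch each pod, inspect its Ready condition, and retry until it flips or the deadline expires (hence the repeated "is not \"Ready\"" warnings before each success). A sketch of the equivalent check with client-go, assuming a hypothetical waitPodReady helper and a kubeconfig at the default location:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady (hypothetical helper) polls until the named pod reports the
// Ready condition, roughly what pod_ready.go's wait loop above is doing.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(context.Background(), cs, "kube-system", "etcd-pause-011818")
	fmt.Println("ready:", err == nil)
}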
	
	
	==> CRI-O <==
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.646603843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298180646571963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=071ea754-4f51-44e5-be83-7501011c7f35 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.647463888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4549c26-2a7b-4001-a54f-fc29d2199f81 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.647531767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4549c26-2a7b-4001-a54f-fc29d2199f81 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.647782526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89
1c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d
7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.
kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298147078320735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd09
2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d,PodSandboxId:27b5e281bc5fdb0574461ea4d7d6661aa8539127bc2c5b8cdcbf66c5a139bc6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298144831650331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7,PodSandboxId:b24c2cc9a1481fd65083fe2352ac7300899ea13cf9e33101acf5c090c62652e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298144686868249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5,PodSandboxId:3f2647b4601a9b9510ed44fd0d1d67060c4024063b8fb7f8da929e32480bda36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298144565348583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4,PodSandboxId:619df19f77a01bbf26aaf9ee208296766c861c6227cdd8ea9a81bb651bf5c38f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298144422118394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba,PodSandboxId:d19ab491ab5ee6c48e618b086c5256911e31c9d2db9726d3ca6440f4c00fc57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298144375310301,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4549c26-2a7b-4001-a54f-fc29d2199f81 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.701956946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf8eb46f-491e-40f1-95fd-160ee3671171 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.702051937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf8eb46f-491e-40f1-95fd-160ee3671171 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.703628067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74eaa5e4-2180-45c1-874b-25422763398e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.704168284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298180704130861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74eaa5e4-2180-45c1-874b-25422763398e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.705005961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdb941db-67fd-4af8-be3e-d8da8f8f4fae name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.705089377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdb941db-67fd-4af8-be3e-d8da8f8f4fae name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.705502205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89
1c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d
7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.
kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298147078320735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd09
2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d,PodSandboxId:27b5e281bc5fdb0574461ea4d7d6661aa8539127bc2c5b8cdcbf66c5a139bc6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298144831650331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7,PodSandboxId:b24c2cc9a1481fd65083fe2352ac7300899ea13cf9e33101acf5c090c62652e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298144686868249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5,PodSandboxId:3f2647b4601a9b9510ed44fd0d1d67060c4024063b8fb7f8da929e32480bda36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298144565348583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4,PodSandboxId:619df19f77a01bbf26aaf9ee208296766c861c6227cdd8ea9a81bb651bf5c38f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298144422118394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba,PodSandboxId:d19ab491ab5ee6c48e618b086c5256911e31c9d2db9726d3ca6440f4c00fc57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298144375310301,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdb941db-67fd-4af8-be3e-d8da8f8f4fae name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.772389053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=577ffab9-7a75-4181-b13d-3385709f6149 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.772586388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=577ffab9-7a75-4181-b13d-3385709f6149 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.776225116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7ff2800-5479-4aad-9b42-d8c877e38667 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.776864003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298180776798889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7ff2800-5479-4aad-9b42-d8c877e38667 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.777707164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fae4e052-a832-4496-80cf-e3d3e051b4cb name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.777785786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fae4e052-a832-4496-80cf-e3d3e051b4cb name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.778199956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89
1c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d
7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.
kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298147078320735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d,PodSandboxId:27b5e281bc5fdb0574461ea4d7d6661aa8539127bc2c5b8cdcbf66c5a139bc6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298144831650331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7,PodSandboxId:b24c2cc9a1481fd65083fe2352ac7300899ea13cf9e33101acf5c090c62652e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298144686868249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5,PodSandboxId:3f2647b4601a9b9510ed44fd0d1d67060c4024063b8fb7f8da929e32480bda36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298144565348583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4,PodSandboxId:619df19f77a01bbf26aaf9ee208296766c861c6227cdd8ea9a81bb651bf5c38f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298144422118394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba,PodSandboxId:d19ab491ab5ee6c48e618b086c5256911e31c9d2db9726d3ca6440f4c00fc57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298144375310301,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fae4e052-a832-4496-80cf-e3d3e051b4cb name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.831318315Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c39d0505-f04f-426a-a3d9-9d87211a245c name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.831519478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c39d0505-f04f-426a-a3d9-9d87211a245c name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.832859980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2712c2e1-84e4-48da-baff-c01fab0e4dd0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.833801661Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298180833762535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2712c2e1-84e4-48da-baff-c01fab0e4dd0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.834911217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff2b7aa1-0894-4e77-a7d8-e108e297c5f6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.835048652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff2b7aa1-0894-4e77-a7d8-e108e297c5f6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:00 pause-011818 crio[3209]: time="2025-06-30 15:43:00.835519098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89
1c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d
7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.
kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298147078320735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd09
2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d,PodSandboxId:27b5e281bc5fdb0574461ea4d7d6661aa8539127bc2c5b8cdcbf66c5a139bc6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298144831650331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7,PodSandboxId:b24c2cc9a1481fd65083fe2352ac7300899ea13cf9e33101acf5c090c62652e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298144686868249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5,PodSandboxId:3f2647b4601a9b9510ed44fd0d1d67060c4024063b8fb7f8da929e32480bda36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298144565348583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4,PodSandboxId:619df19f77a01bbf26aaf9ee208296766c861c6227cdd8ea9a81bb651bf5c38f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298144422118394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba,PodSandboxId:d19ab491ab5ee6c48e618b086c5256911e31c9d2db9726d3ca6440f4c00fc57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298144375310301,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff2b7aa1-0894-4e77-a7d8-e108e297c5f6 name=/runtime.v1.RuntimeService/ListContainers
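
	The ListContainers dumps above are routine CRI traffic: the kubelet (and minikube's log collector) periodically calls /runtime.v1.RuntimeService/ListContainers with an empty filter, so crio's debug log echoes the full container list on every sync. A roughly equivalent query can be run by hand on the node; this is a sketch using the profile name from this run and assumes crictl is on the node's PATH, as it is in minikube's VM images:
	
	    $ minikube ssh -p pause-011818 -- sudo crictl ps -a -o json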
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10c7dd782d613       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   15 seconds ago      Running             coredns                   2                   303940235d080       coredns-674b8bbfcf-m5x9v
	5e3cf46766f91       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19   16 seconds ago      Running             kube-proxy                2                   cf1e7e2201669       kube-proxy-mgmjs
	3238028e858f5       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b   21 seconds ago      Running             kube-scheduler            2                   2fe8b46009820       kube-scheduler-pause-011818
	09c748be02f81       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e   21 seconds ago      Running             kube-apiserver            2                   81604ac034f43       kube-apiserver-pause-011818
	c7cdc277a6a11       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2   21 seconds ago      Running             kube-controller-manager   2                   c819720ebd365       kube-controller-manager-pause-011818
	d66b32dd776d9       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   21 seconds ago      Running             etcd                      2                   6062c62e33576       etcd-pause-011818
	5b9dfb17fd2a2       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   33 seconds ago      Exited              coredns                   1                   303940235d080       coredns-674b8bbfcf-m5x9v
	7e163e9ac9670       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19   36 seconds ago      Exited              kube-proxy                1                   27b5e281bc5fd       kube-proxy-mgmjs
	308e6e5defd8c       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   36 seconds ago      Exited              etcd                      1                   b24c2cc9a1481       etcd-pause-011818
	94e00cb01194c       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2   36 seconds ago      Exited              kube-controller-manager   1                   3f2647b4601a9       kube-controller-manager-pause-011818
	3bd671174d97d       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b   36 seconds ago      Exited              kube-scheduler            1                   619df19f77a01       kube-scheduler-pause-011818
	77a634c0334d1       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e   36 seconds ago      Exited              kube-apiserver            1                   d19ab491ab5ee       kube-apiserver-pause-011818
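
	The table confirms a clean restart cycle: every attempt-1 container (created 33-36 seconds ago) is Exited and has a Running attempt-2 replacement. Logs for any single container can be pulled by ID prefix, e.g. for the exited coredns attempt, with the prefix taken from the table above:
	
	    $ minikube ssh -p pause-011818 -- sudo crictl logs 5b9dfb17fd2a2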
	
	
	==> coredns [10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:36043 - 6235 "HINFO IN 479217239100511728.334149645577510968. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.037237828s
	
	
	==> coredns [5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47160 - 43593 "HINFO IN 6357363114233410437.8443103519251798570. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044394046s
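
	This is the attempt-1 coredns instance dying during the restart: it cannot reach the apiserver at 10.96.0.1:443 (connection refused), waits for the Kubernetes API, receives SIGTERM, and comes up briefly with an unsynced cache before lameduck shutdown. The trailing random-label HINFO query appears to be the loop plugin's self-check probe rather than a real client lookup. Once the cluster settles, a common DNS smoke test looks like this (the pod name and busybox image are illustrative choices, not from this run):
	
	    $ kubectl --context pause-011818 run dnsprobe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local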
	
	
	==> describe nodes <==
	Name:               pause-011818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-011818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=pause-011818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T15_41_16_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 15:41:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-011818
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 15:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.93
	  Hostname:    pause-011818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	System Info:
	  Machine ID:                 270c033ae6994c2ea575daba35bfc05b
	  System UUID:                270c033a-e699-4c2e-a575-daba35bfc05b
	  Boot ID:                    863ffb22-37fa-4962-acfc-7c93023ccee4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-m5x9v                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     101s
	  kube-system                 etcd-pause-011818                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         106s
	  kube-system                 kube-apiserver-pause-011818             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-pause-011818    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-mgmjs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-pause-011818             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 99s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     106s               kubelet          Node pause-011818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  106s               kubelet          Node pause-011818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s               kubelet          Node pause-011818 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 106s               kubelet          Starting kubelet.
	  Normal  NodeReady                105s               kubelet          Node pause-011818 status is now: NodeReady
	  Normal  RegisteredNode           102s               node-controller  Node pause-011818 event: Registered Node pause-011818 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-011818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-011818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-011818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-011818 event: Registered Node pause-011818 in Controller
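
	The event list shows both kubelet generations: the 106s-old Starting/NodeHas* events are from the original boot, the 22s-old duplicates are from the restart under test, and each generation is followed by its own RegisteredNode event from the node controller. This view can be regenerated at any point with:
	
	    $ kubectl --context pause-011818 describe node pause-011818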
	
	
	==> dmesg <==
	[Jun30 15:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001119] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002988] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.162940] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.091074] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 15:41] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.110929] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.136062] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.385295] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.760443] kauditd_printk_skb: 69 callbacks suppressed
	[Jun30 15:42] kauditd_printk_skb: 199 callbacks suppressed
	[  +5.531124] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.156833] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7] <==
	
	
	==> etcd [d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0] <==
	{"level":"info","ts":"2025-06-30T15:42:41.568743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa received MsgPreVoteResp from 4e6e2c9029caadaa at term 2"}
	{"level":"info","ts":"2025-06-30T15:42:41.568775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.568820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa received MsgVoteResp from 4e6e2c9029caadaa at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.568840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became leader at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.568871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6e2c9029caadaa elected leader 4e6e2c9029caadaa at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.578925Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"4e6e2c9029caadaa","local-member-attributes":"{Name:pause-011818 ClientURLs:[https://192.168.61.93:2379]}","request-path":"/0/members/4e6e2c9029caadaa/attributes","cluster-id":"4a4285095021b5a3","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T15:42:41.579134Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:42:41.579528Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:42:41.581978Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:42:41.582728Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T15:42:41.584994Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:42:41.588139Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.93:2379"}
	{"level":"info","ts":"2025-06-30T15:42:41.586511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T15:42:41.608102Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T15:42:54.513777Z","caller":"traceutil/trace.go:171","msg":"trace[798175544] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"121.520399ms","start":"2025-06-30T15:42:54.392232Z","end":"2025-06-30T15:42:54.513753Z","steps":["trace[798175544] 'process raft request'  (duration: 59.551047ms)","trace[798175544] 'compare'  (duration: 61.832353ms)"],"step_count":2}
	{"level":"info","ts":"2025-06-30T15:42:56.032078Z","caller":"traceutil/trace.go:171","msg":"trace[1434113713] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"117.006114ms","start":"2025-06-30T15:42:55.915051Z","end":"2025-06-30T15:42:56.032057Z","steps":["trace[1434113713] 'process raft request'  (duration: 116.598514ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T15:42:56.227238Z","caller":"traceutil/trace.go:171","msg":"trace[253546296] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"180.955801ms","start":"2025-06-30T15:42:56.046262Z","end":"2025-06-30T15:42:56.227217Z","steps":["trace[253546296] 'process raft request'  (duration: 138.645195ms)","trace[253546296] 'compare'  (duration: 42.195884ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T15:42:56.660880Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.043969ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12513981371954611519 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" value_size:4574 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-06-30T15:42:56.660998Z","caller":"traceutil/trace.go:171","msg":"trace[1557464774] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"125.350306ms","start":"2025-06-30T15:42:56.535636Z","end":"2025-06-30T15:42:56.660986Z","steps":["trace[1557464774] 'read index received'  (duration: 27.183µs)","trace[1557464774] 'applied index is now lower than readState.Index'  (duration: 125.322048ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T15:42:56.661061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.4193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-011818\" limit:1 ","response":"range_response_count:1 size:5747"}
	{"level":"info","ts":"2025-06-30T15:42:56.661076Z","caller":"traceutil/trace.go:171","msg":"trace[2050555326] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-011818; range_end:; response_count:1; response_revision:484; }","duration":"125.457893ms","start":"2025-06-30T15:42:56.535613Z","end":"2025-06-30T15:42:56.661071Z","steps":["trace[2050555326] 'agreement among raft nodes before linearized reading'  (duration: 125.406859ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T15:42:56.661253Z","caller":"traceutil/trace.go:171","msg":"trace[1185542617] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"420.539593ms","start":"2025-06-30T15:42:56.240702Z","end":"2025-06-30T15:42:56.661241Z","steps":["trace[1185542617] 'process raft request'  (duration: 173.248259ms)","trace[1185542617] 'compare'  (duration: 245.949958ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T15:42:56.664936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T15:42:56.240684Z","time spent":"424.184362ms","remote":"127.0.0.1:34688","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4636,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" value_size:4574 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" > >"}
	{"level":"warn","ts":"2025-06-30T15:42:56.989526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.139434ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T15:42:56.989605Z","caller":"traceutil/trace.go:171","msg":"trace[1701074428] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:484; }","duration":"198.234599ms","start":"2025-06-30T15:42:56.791359Z","end":"2025-06-30T15:42:56.989594Z","steps":["trace[1701074428] 'range keys from in-memory index tree'  (duration: 198.111801ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:43:01 up 2 min,  0 users,  load average: 0.63, 0.36, 0.14
	Linux pause-011818 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449] <==
	I0630 15:42:43.302550       1 shared_informer.go:357] "Caches are synced" controller="configmaps"
	I0630 15:42:43.303178       1 shared_informer.go:357] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0630 15:42:43.303727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0630 15:42:43.305766       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0630 15:42:43.305893       1 shared_informer.go:357] "Caches are synced" controller="ipallocator-repair-controller"
	I0630 15:42:43.309504       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:42:43.312221       1 shared_informer.go:357] "Caches are synced" controller="crd-autoregister"
	I0630 15:42:43.312283       1 shared_informer.go:357] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0630 15:42:43.312328       1 default_servicecidr_controller.go:136] Shutting down kubernetes-service-cidr-controller
	I0630 15:42:43.312699       1 aggregator.go:171] initial CRD sync complete...
	I0630 15:42:43.312752       1 autoregister_controller.go:144] Starting autoregister controller
	I0630 15:42:43.312759       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0630 15:42:43.312764       1 cache.go:39] Caches are synced for autoregister controller
	E0630 15:42:43.315203       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0630 15:42:43.343756       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0630 15:42:43.369792       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0630 15:42:44.106933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 15:42:44.531363       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 15:42:44.607867       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0630 15:42:44.656300       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 15:42:44.666588       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 15:42:46.736062       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0630 15:42:46.784082       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 15:42:46.843244       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:42:47.040040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba] <==
	
	
	==> kube-controller-manager [94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5] <==
	
	
	==> kube-controller-manager [c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e] <==
	I0630 15:42:46.537277       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 15:42:46.537412       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-011818"
	I0630 15:42:46.537590       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 15:42:46.541844       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0630 15:42:46.541880       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0630 15:42:46.545801       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0630 15:42:46.548009       1 shared_informer.go:357] "Caches are synced" controller="TTL"
	I0630 15:42:46.655066       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 15:42:46.721872       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0630 15:42:46.721892       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0630 15:42:46.724211       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0630 15:42:46.724301       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0630 15:42:46.731166       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 15:42:46.734249       1 shared_informer.go:357] "Caches are synced" controller="PV protection"
	I0630 15:42:46.734463       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0630 15:42:46.792011       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 15:42:46.792259       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 15:42:46.795663       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 15:42:46.799296       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 15:42:46.842575       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 15:42:46.881508       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 15:42:47.263576       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 15:42:47.331305       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 15:42:47.331337       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 15:42:47.331345       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00] <==
	E0630 15:42:44.662077       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 15:42:44.678791       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.61.93"]
	E0630 15:42:44.678905       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 15:42:44.727337       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 15:42:44.727392       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 15:42:44.727454       1 server_linux.go:145] "Using iptables Proxier"
	I0630 15:42:44.744151       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 15:42:44.744527       1 server.go:516] "Version info" version="v1.33.2"
	I0630 15:42:44.744552       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:42:44.749642       1 config.go:199] "Starting service config controller"
	I0630 15:42:44.750351       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 15:42:44.750379       1 config.go:105] "Starting endpoint slice config controller"
	I0630 15:42:44.750383       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 15:42:44.750407       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 15:42:44.750446       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 15:42:44.750532       1 config.go:329] "Starting node config controller"
	I0630 15:42:44.750536       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 15:42:44.850700       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 15:42:44.850817       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 15:42:44.851353       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 15:42:44.851726       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
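
	The leading nftables error is kube-proxy's startup cleanup probing for a stale ip6 kube-proxy table on a kernel built without nft support; since this proxy then reports no iptables support for the IPv6 family and settles on the IPv4 iptables proxier, the error is cosmetic. One rough way to confirm the iptables backend was actually programmed is to count KUBE-* chains and rules on the node (the exact count varies with the workload):
	
	    $ minikube ssh -p pause-011818 -- sudo iptables-save | grep -c KUBE-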
	
	
	==> kube-proxy [7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d] <==
	
	
	==> kube-scheduler [3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4] <==
	W0630 15:42:43.130637       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 15:42:43.130702       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 15:42:43.237067       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 15:42:43.239506       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:42:43.244200       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 15:42:43.244583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:42:43.244673       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:42:43.244709       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0630 15:42:43.262547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 15:42:43.262876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 15:42:43.263090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 15:42:43.263286       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 15:42:43.263406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 15:42:43.263620       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 15:42:43.263832       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 15:42:43.263927       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 15:42:43.266801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 15:42:43.267178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 15:42:43.267509       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 15:42:43.267845       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 15:42:43.268099       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 15:42:43.268394       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 15:42:43.268633       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 15:42:43.269310       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 15:42:44.567014       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
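
	The burst of "Failed to watch ... is forbidden" errors at 15:42:43 is the restarted scheduler racing the restarted apiserver: until the apiserver's RBAC caches sync, even system:kube-scheduler gets forbidden responses. They stop within about a second, and the final line shows the scheduler's own informer caches syncing. A spot check that the permissions are back (uses kubectl impersonation, so it requires impersonate rights on the current context):
	
	    $ kubectl --context pause-011818 auth can-i list pods --as=system:kube-scheduler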
	
	
	==> kube-scheduler [3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4] <==
	
	
	==> kubelet <==
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.266069    3993 status_manager.go:895] "Failed to get status for pod" podUID="fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62" pod="kube-system/coredns-674b8bbfcf-m5x9v" err="pods \"coredns-674b8bbfcf-m5x9v\" is forbidden: User \"system:node:pause-011818\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-011818' and this object"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.268614    3993 status_manager.go:895] "Failed to get status for pod" podUID="8b8ac108-b30d-4905-8502-8bfde43240da" pod="kube-system/kube-proxy-mgmjs" err="pods \"kube-proxy-mgmjs\" is forbidden: User \"system:node:pause-011818\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-011818' and this object"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.273269    3993 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.275677    3993 status_manager.go:895] "Failed to get status for pod" podUID="7e83b9769c7ab2096e0acb50384b7cb0" pod="kube-system/etcd-pause-011818" err="pods \"etcd-pause-011818\" is forbidden: User \"system:node:pause-011818\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-011818' and this object"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.281902    3993 status_manager.go:895] "Failed to get status for pod" podUID="891c29df416c90e174a5864263ac6202" pod="kube-system/kube-apiserver-pause-011818" err=<
	Jun 30 15:42:43 pause-011818 kubelet[3993]:         pods "kube-apiserver-pause-011818" is forbidden: User "system:node:pause-011818" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-011818' and this object
	Jun 30 15:42:43 pause-011818 kubelet[3993]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	Jun 30 15:42:43 pause-011818 kubelet[3993]:  >
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.336762    3993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8ac108-b30d-4905-8502-8bfde43240da-xtables-lock\") pod \"kube-proxy-mgmjs\" (UID: \"8b8ac108-b30d-4905-8502-8bfde43240da\") " pod="kube-system/kube-proxy-mgmjs"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.336970    3993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8ac108-b30d-4905-8502-8bfde43240da-lib-modules\") pod \"kube-proxy-mgmjs\" (UID: \"8b8ac108-b30d-4905-8502-8bfde43240da\") " pod="kube-system/kube-proxy-mgmjs"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.400312    3993 kubelet_node_status.go:124] "Node was previously registered" node="pause-011818"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.400552    3993 kubelet_node_status.go:78] "Successfully registered node" node="pause-011818"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.400619    3993 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.401850    3993 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.447311    3993 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-011818"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: E0630 15:42:43.461035    3993 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-011818\" already exists" pod="kube-system/kube-apiserver-pause-011818"
	Jun 30 15:42:44 pause-011818 kubelet[3993]: E0630 15:42:44.341595    3993 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jun 30 15:42:44 pause-011818 kubelet[3993]: E0630 15:42:44.341874    3993 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62-config-volume podName:fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62 nodeName:}" failed. No retries permitted until 2025-06-30 15:42:44.841843056 +0000 UTC m=+5.715308111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62-config-volume") pod "coredns-674b8bbfcf-m5x9v" (UID: "fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62") : failed to sync configmap cache: timed out waiting for the condition
	Jun 30 15:42:44 pause-011818 kubelet[3993]: I0630 15:42:44.451933    3993 scope.go:117] "RemoveContainer" containerID="7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d"
	Jun 30 15:42:45 pause-011818 kubelet[3993]: I0630 15:42:45.051128    3993 scope.go:117] "RemoveContainer" containerID="5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b"
	Jun 30 15:42:49 pause-011818 kubelet[3993]: E0630 15:42:49.397683    3993 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298169397058046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 15:42:49 pause-011818 kubelet[3993]: E0630 15:42:49.397732    3993 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298169397058046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 15:42:50 pause-011818 kubelet[3993]: I0630 15:42:50.202742    3993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 30 15:42:59 pause-011818 kubelet[3993]: E0630 15:42:59.401174    3993 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298179400291191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 15:42:59 pause-011818 kubelet[3993]: E0630 15:42:59.401771    3993 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298179400291191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-011818 -n pause-011818
helpers_test.go:261: (dbg) Run:  kubectl --context pause-011818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
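
Two error families dominate the kubelet log above, and both read as transient restart noise: the status_manager "forbidden ... no relationship found between node 'pause-011818' and this object" entries occur while the freshly restarted apiserver is still reconciling its bootstrap RBAC roles (the same window in which the healthz output further down reports "[-]poststarthook/rbac/bootstrap-roles failed"), and the recurring eviction_manager "missing image stats" errors show the kubelet rejecting CRI-O's ImageFsInfo response, whose ContainerFilesystems list is empty in the logged payload. To inspect that payload directly, one can query the image service over the CRI socket, e.g. with "sudo crictl imagefsinfo", or with a small client along the lines of the sketch below. The sketch is illustrative only: the socket path assumes CRI-O's default, and this is neither kubelet nor minikube code.

	// imagefsinfo.go: minimal CRI client (illustrative) that fetches the same
	// ImageFsInfo response the eviction manager logs above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path assumes CRI-O's default; running this requires root on the node.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		resp, err := runtimeapi.NewImageServiceClient(conn).
			ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, fs := range resp.ImageFilesystems {
			fmt.Printf("image fs %s: used=%d bytes\n",
				fs.FsId.GetMountpoint(), fs.UsedBytes.GetValue())
		}
		// An empty list here matches the payload the eviction manager rejects above.
		fmt.Printf("container filesystems reported: %d\n", len(resp.ContainerFilesystems))
	}
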
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-011818 -n pause-011818
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-011818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-011818 logs -n 25: (1.842522675s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo cat                            | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo                                | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo find                           | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-668101 sudo crio                           | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-668101                                     | cilium-668101             | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:40 UTC |
	| start   | -p cert-expiration-775975                            | cert-expiration-775975    | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:41 UTC |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-185417                          | force-systemd-env-185417  | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:40 UTC |
	| start   | -p force-systemd-flag-632862                         | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:40 UTC | 30 Jun 25 15:42 UTC |
	|         | --memory=3072 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:41 UTC |
	| start   | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:42 UTC |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-011818                                      | pause-011818              | jenkins | v1.36.0 | 30 Jun 25 15:41 UTC | 30 Jun 25 15:42 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-632862 ssh cat                    | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:42 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-632862                         | force-systemd-flag-632862 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC | 30 Jun 25 15:42 UTC |
	| start   | -p cert-options-329017                               | cert-options-329017       | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-691468                         | kubernetes-upgrade-691468 | jenkins | v1.36.0 | 30 Jun 25 15:42 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:42:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:42:35.853113 1606244 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:42:35.853436 1606244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:42:35.853448 1606244 out.go:358] Setting ErrFile to fd 2...
	I0630 15:42:35.853456 1606244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:42:35.853724 1606244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:42:35.854405 1606244 out.go:352] Setting JSON to false
	I0630 15:42:35.855594 1606244 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33848,"bootTime":1751264308,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:42:35.855670 1606244 start.go:140] virtualization: kvm guest
	I0630 15:42:35.858215 1606244 out.go:177] * [kubernetes-upgrade-691468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:42:35.860209 1606244 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:42:35.860228 1606244 notify.go:220] Checking for updates...
	I0630 15:42:35.865119 1606244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:42:35.866542 1606244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:42:35.868138 1606244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:42:35.869605 1606244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:42:35.871126 1606244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:42:35.873306 1606244 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:35.873808 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:42:35.873880 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:42:35.891821 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37003
	I0630 15:42:35.892431 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:42:35.893175 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:42:35.893221 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:42:35.893689 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:42:35.893943 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:35.894255 1606244 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:42:35.894983 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:42:35.895077 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:42:35.915714 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0630 15:42:35.916489 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:42:35.917168 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:42:35.917198 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:42:35.917599 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:42:35.917861 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:35.967770 1606244 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 15:42:35.969920 1606244 start.go:304] selected driver: kvm2
	I0630 15:42:35.969952 1606244 start.go:908] validating driver "kvm2" against &{Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:42:35.970102 1606244 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:42:35.970886 1606244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:42:35.970991 1606244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:42:35.990124 1606244 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:42:35.990586 1606244 cni.go:84] Creating CNI manager for ""
	I0630 15:42:35.990645 1606244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:42:35.990696 1606244 start.go:347] cluster config:
	{Name:kubernetes-upgrade-691468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:kubernetes-upgrade-691468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:42:35.990882 1606244 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:42:35.993369 1606244 out.go:177] * Starting "kubernetes-upgrade-691468" primary control-plane node in "kubernetes-upgrade-691468" cluster
	I0630 15:42:37.395279 1605445 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b 7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d 308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7 94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5 3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4 77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba 194e6ed98f266425217bba7b5865190a0df019b532fe72bedbecea0ea6f2b9a0: (9.391509525s)
	I0630 15:42:37.395375 1605445 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0630 15:42:37.457294 1605445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:42:37.473364 1605445 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jun 30 15:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun 30 15:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1969 Jun 30 15:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 30 15:41 /etc/kubernetes/scheduler.conf
	
	I0630 15:42:37.473492 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:42:37.488502 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:42:37.502707 1605445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:42:37.502782 1605445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:42:37.517688 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:42:37.531124 1605445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:42:37.531212 1605445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:42:37.547909 1605445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:42:37.560330 1605445 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:42:37.560417 1605445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
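
The three kubeadm.go:163 entries above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file where grep exits with status 1 is deleted so the "kubeadm init phase kubeconfig all" run a few lines below can regenerate it (admin.conf passed the check, so it is left in place). A minimal sketch of the same check follows, assuming the endpoint and paths shown in the log; it is not minikube's actual implementation.

	// kubeconfig_check.go: hypothetical re-implementation of the cleanup
	// logged above. Endpoint and paths are taken from the log.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		for _, path := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(path)
			if err != nil {
				continue // missing file: nothing to clean up
			}
			if !bytes.Contains(data, endpoint) {
				// Stale endpoint: remove the file so "kubeadm init phase
				// kubeconfig" writes a fresh one.
				fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
				if err := os.Remove(path); err != nil {
					fmt.Println("remove failed:", err)
				}
			}
		}
	}
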
	I0630 15:42:37.573294 1605445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:42:37.584981 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:37.643100 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:36.008014 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:36.008818 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find current IP address of domain cert-options-329017 in network mk-cert-options-329017
	I0630 15:42:36.008838 1605949 main.go:141] libmachine: (cert-options-329017) DBG | I0630 15:42:36.008780 1606047 retry.go:31] will retry after 3.439221971s: waiting for domain to come up
	I0630 15:42:39.449354 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:39.450005 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find current IP address of domain cert-options-329017 in network mk-cert-options-329017
	I0630 15:42:39.450090 1605949 main.go:141] libmachine: (cert-options-329017) DBG | I0630 15:42:39.450001 1606047 retry.go:31] will retry after 3.302475314s: waiting for domain to come up
	I0630 15:42:35.995150 1606244 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:42:35.995350 1606244 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 15:42:35.995374 1606244 cache.go:56] Caching tarball of preloaded images
	I0630 15:42:35.995494 1606244 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:42:35.995512 1606244 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 15:42:35.995657 1606244 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kubernetes-upgrade-691468/config.json ...
	I0630 15:42:35.995915 1606244 start.go:360] acquireMachinesLock for kubernetes-upgrade-691468: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:42:38.853268 1605445 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.210112749s)
	I0630 15:42:38.853316 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:39.132560 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:39.204851 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:39.295067 1605445 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:42:39.295187 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:39.796163 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:40.295601 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:40.316394 1605445 api_server.go:72] duration metric: took 1.021328363s to wait for apiserver process to appear ...
	I0630 15:42:40.316425 1605445 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:42:40.316444 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.197448 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:42:43.197484 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:42:43.197505 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.242367 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0630 15:42:43.242397 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0630 15:42:43.316608 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.322258 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:42:43.322290 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:42:43.817512 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:43.822747 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0630 15:42:43.822783 1605445 api_server.go:103] status: https://192.168.61.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0630 15:42:44.317444 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:44.324363 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0630 15:42:44.333892 1605445 api_server.go:141] control plane version: v1.33.2
	I0630 15:42:44.333936 1605445 api_server.go:131] duration metric: took 4.017501906s to wait for apiserver health ...
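
The 403 -> 500 -> 200 progression above is the typical shape of an apiserver restart: the anonymous probe is rejected outright until authentication is up, /healthz then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally it flips to 200. A minimal poll loop in the same spirit is sketched below; it is illustrative only, since minikube's api_server.go additionally handles client certificates and detailed status reporting that this sketch omits.

	// healthz_poll.go: illustrative poll of an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// This probe is anonymous and skips TLS verification, which is why a
		// secured apiserver answers it with 403 at first (as in the log above).
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for i := 0; i < 120; i++ { // about a minute at 500ms per attempt
			resp, err := client.Get("https://192.168.61.93:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz")
	}
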
	I0630 15:42:44.333949 1605445 cni.go:84] Creating CNI manager for ""
	I0630 15:42:44.333959 1605445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:42:44.336020 1605445 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 15:42:44.337442 1605445 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:42:44.351732 1605445 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
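
The 496-byte file written above is the bridge CNI configuration installed by the "Configuring bridge CNI" step. A conflist of that kind looks roughly like the one embedded in the sketch below; the field values are illustrative, not a byte-for-byte copy of what minikube generated.

	// write_conflist.go: hypothetical sketch of installing a bridge CNI
	// conflist at the path used above; values are illustrative.
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Writing to /etc/cni/net.d requires root on the node.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
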
	I0630 15:42:44.376840 1605445 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:42:44.380586 1605445 system_pods.go:59] 6 kube-system pods found
	I0630 15:42:44.380636 1605445 system_pods.go:61] "coredns-674b8bbfcf-m5x9v" [fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:42:44.380649 1605445 system_pods.go:61] "etcd-pause-011818" [4bcbec68-c5f1-4075-aea0-9886466aac76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:42:44.380663 1605445 system_pods.go:61] "kube-apiserver-pause-011818" [f4aac6bd-3a23-4c64-8ce4-09e508687d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:42:44.380678 1605445 system_pods.go:61] "kube-controller-manager-pause-011818" [18b5d2ba-f0f1-4f82-9e0c-7df77d432a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:42:44.380692 1605445 system_pods.go:61] "kube-proxy-mgmjs" [8b8ac108-b30d-4905-8502-8bfde43240da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0630 15:42:44.380704 1605445 system_pods.go:61] "kube-scheduler-pause-011818" [d89944c8-73f1-42c8-bc87-b8bd6dfbe11b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:42:44.380718 1605445 system_pods.go:74] duration metric: took 3.845351ms to wait for pod list to return data ...
	I0630 15:42:44.380754 1605445 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:42:44.383534 1605445 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:42:44.383569 1605445 node_conditions.go:123] node cpu capacity is 2
	I0630 15:42:44.383587 1605445 node_conditions.go:105] duration metric: took 2.823647ms to run NodePressure ...
	I0630 15:42:44.383611 1605445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:42:44.689248 1605445 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0630 15:42:44.694705 1605445 kubeadm.go:735] kubelet initialised
	I0630 15:42:44.694731 1605445 kubeadm.go:736] duration metric: took 5.449976ms waiting for restarted kubelet to initialise ...
	I0630 15:42:44.694749 1605445 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:42:44.710526 1605445 ops.go:34] apiserver oom_adj: -16
	I0630 15:42:44.710565 1605445 kubeadm.go:593] duration metric: took 16.796281707s to restartPrimaryControlPlane
	I0630 15:42:44.710579 1605445 kubeadm.go:394] duration metric: took 16.906670103s to StartCluster
	I0630 15:42:44.710607 1605445 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:44.710722 1605445 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:42:44.712076 1605445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:44.712469 1605445 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:42:44.712581 1605445 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:42:44.712733 1605445 config.go:182] Loaded profile config "pause-011818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:44.714734 1605445 out.go:177] * Verifying Kubernetes components...
	I0630 15:42:44.714734 1605445 out.go:177] * Enabled addons: 
	I0630 15:42:42.755130 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:42.755544 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find current IP address of domain cert-options-329017 in network mk-cert-options-329017
	I0630 15:42:42.755588 1605949 main.go:141] libmachine: (cert-options-329017) DBG | I0630 15:42:42.755503 1606047 retry.go:31] will retry after 4.405786509s: waiting for domain to come up
	I0630 15:42:44.715932 1605445 addons.go:514] duration metric: took 3.366717ms for enable addons: enabled=[]
	I0630 15:42:44.715957 1605445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:42:44.928469 1605445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:42:44.953138 1605445 node_ready.go:35] waiting up to 6m0s for node "pause-011818" to be "Ready" ...
	I0630 15:42:44.957122 1605445 node_ready.go:49] node "pause-011818" is "Ready"
	I0630 15:42:44.957161 1605445 node_ready.go:38] duration metric: took 3.965984ms for node "pause-011818" to be "Ready" ...
	I0630 15:42:44.957176 1605445 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:42:44.957236 1605445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:42:44.975894 1605445 api_server.go:72] duration metric: took 263.375426ms to wait for apiserver process to appear ...
	I0630 15:42:44.975922 1605445 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:42:44.975950 1605445 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0630 15:42:44.981961 1605445 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0630 15:42:44.983347 1605445 api_server.go:141] control plane version: v1.33.2
	I0630 15:42:44.983392 1605445 api_server.go:131] duration metric: took 7.451657ms to wait for apiserver health ...
	I0630 15:42:44.983406 1605445 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:42:44.989072 1605445 system_pods.go:59] 6 kube-system pods found
	I0630 15:42:44.989103 1605445 system_pods.go:61] "coredns-674b8bbfcf-m5x9v" [fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:42:44.989111 1605445 system_pods.go:61] "etcd-pause-011818" [4bcbec68-c5f1-4075-aea0-9886466aac76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:42:44.989118 1605445 system_pods.go:61] "kube-apiserver-pause-011818" [f4aac6bd-3a23-4c64-8ce4-09e508687d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:42:44.989125 1605445 system_pods.go:61] "kube-controller-manager-pause-011818" [18b5d2ba-f0f1-4f82-9e0c-7df77d432a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:42:44.989130 1605445 system_pods.go:61] "kube-proxy-mgmjs" [8b8ac108-b30d-4905-8502-8bfde43240da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0630 15:42:44.989136 1605445 system_pods.go:61] "kube-scheduler-pause-011818" [d89944c8-73f1-42c8-bc87-b8bd6dfbe11b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:42:44.989152 1605445 system_pods.go:74] duration metric: took 5.738845ms to wait for pod list to return data ...
	I0630 15:42:44.989164 1605445 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:42:44.991965 1605445 default_sa.go:45] found service account: "default"
	I0630 15:42:44.991998 1605445 default_sa.go:55] duration metric: took 2.825661ms for default service account to be created ...
	I0630 15:42:44.992013 1605445 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:42:44.995112 1605445 system_pods.go:86] 6 kube-system pods found
	I0630 15:42:44.995147 1605445 system_pods.go:89] "coredns-674b8bbfcf-m5x9v" [fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:42:44.995156 1605445 system_pods.go:89] "etcd-pause-011818" [4bcbec68-c5f1-4075-aea0-9886466aac76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0630 15:42:44.995164 1605445 system_pods.go:89] "kube-apiserver-pause-011818" [f4aac6bd-3a23-4c64-8ce4-09e508687d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:42:44.995170 1605445 system_pods.go:89] "kube-controller-manager-pause-011818" [18b5d2ba-f0f1-4f82-9e0c-7df77d432a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:42:44.995175 1605445 system_pods.go:89] "kube-proxy-mgmjs" [8b8ac108-b30d-4905-8502-8bfde43240da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0630 15:42:44.995182 1605445 system_pods.go:89] "kube-scheduler-pause-011818" [d89944c8-73f1-42c8-bc87-b8bd6dfbe11b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:42:44.995191 1605445 system_pods.go:126] duration metric: took 3.170036ms to wait for k8s-apps to be running ...
	I0630 15:42:44.995198 1605445 system_svc.go:44] waiting for kubelet service to be running ...
	I0630 15:42:44.995247 1605445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:42:45.013136 1605445 system_svc.go:56] duration metric: took 17.92073ms WaitForService to wait for kubelet
	I0630 15:42:45.013180 1605445 kubeadm.go:578] duration metric: took 300.667792ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:42:45.013210 1605445 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:42:45.016262 1605445 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:42:45.016298 1605445 node_conditions.go:123] node cpu capacity is 2
	I0630 15:42:45.016314 1605445 node_conditions.go:105] duration metric: took 3.097787ms to run NodePressure ...
	I0630 15:42:45.016333 1605445 start.go:241] waiting for startup goroutines ...
	I0630 15:42:45.016344 1605445 start.go:246] waiting for cluster config update ...
	I0630 15:42:45.016356 1605445 start.go:255] writing updated cluster config ...
	I0630 15:42:45.016738 1605445 ssh_runner.go:195] Run: rm -f paused
	I0630 15:42:45.022095 1605445 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:42:45.022815 1605445 kapi.go:59] client config for pause-011818: &rest.Config{Host:"https://192.168.61.93:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
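
The kapi.go client config above is assembled from the profile's client certificate, key, and CA. A minimal client-go sketch of building an equivalent client (host and file paths are taken verbatim from the log; everything else is illustrative, not minikube's actual code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Mirror the logged rest.Config: API server address plus mTLS client certs.
		cfg := &rest.Config{
			Host: "https://192.168.61.93:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/pause-011818/client.key",
				CAFile:   "/home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt",
			},
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))
	}
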
	I0630 15:42:45.025968 1605445 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-m5x9v" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:42:47.034205 1605445 pod_ready.go:104] pod "coredns-674b8bbfcf-m5x9v" is not "Ready", error: <nil>
	I0630 15:42:49.019206 1606244 start.go:364] duration metric: took 13.023203423s to acquireMachinesLock for "kubernetes-upgrade-691468"
	I0630 15:42:49.019281 1606244 start.go:96] Skipping create...Using existing machine configuration
	I0630 15:42:49.019290 1606244 fix.go:54] fixHost starting: 
	I0630 15:42:49.019723 1606244 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:42:49.019786 1606244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:42:49.040153 1606244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0630 15:42:49.040712 1606244 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:42:49.041240 1606244 main.go:141] libmachine: Using API Version  1
	I0630 15:42:49.041279 1606244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:42:49.041753 1606244 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:42:49.042011 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:49.042225 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetState
	I0630 15:42:49.044289 1606244 fix.go:112] recreateIfNeeded on kubernetes-upgrade-691468: state=Running err=<nil>
	W0630 15:42:49.044315 1606244 fix.go:138] unexpected machine state, will restart: <nil>
	I0630 15:42:49.046520 1606244 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-691468" VM ...
	I0630 15:42:47.162760 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.163334 1605949 main.go:141] libmachine: (cert-options-329017) found domain IP: 192.168.39.244
	I0630 15:42:47.163368 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has current primary IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.163373 1605949 main.go:141] libmachine: (cert-options-329017) reserving static IP address...
	I0630 15:42:47.163896 1605949 main.go:141] libmachine: (cert-options-329017) DBG | unable to find host DHCP lease matching {name: "cert-options-329017", mac: "52:54:00:1c:2c:a0", ip: "192.168.39.244"} in network mk-cert-options-329017
	I0630 15:42:47.263288 1605949 main.go:141] libmachine: (cert-options-329017) reserved static IP address 192.168.39.244 for domain cert-options-329017
	I0630 15:42:47.263305 1605949 main.go:141] libmachine: (cert-options-329017) waiting for SSH...
	I0630 15:42:47.263328 1605949 main.go:141] libmachine: (cert-options-329017) DBG | Getting to WaitForSSH function...
	I0630 15:42:47.266844 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.267388 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.267415 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.267589 1605949 main.go:141] libmachine: (cert-options-329017) DBG | Using SSH client type: external
	I0630 15:42:47.267612 1605949 main.go:141] libmachine: (cert-options-329017) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa (-rw-------)
	I0630 15:42:47.267646 1605949 main.go:141] libmachine: (cert-options-329017) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:42:47.267673 1605949 main.go:141] libmachine: (cert-options-329017) DBG | About to run SSH command:
	I0630 15:42:47.267686 1605949 main.go:141] libmachine: (cert-options-329017) DBG | exit 0
	I0630 15:42:47.398023 1605949 main.go:141] libmachine: (cert-options-329017) DBG | SSH cmd err, output: <nil>: 
	I0630 15:42:47.398376 1605949 main.go:141] libmachine: (cert-options-329017) KVM machine creation complete
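
The WaitForSSH sequence above amounts to retrying a no-op command over SSH until the guest's sshd accepts the key. A rough sketch of that loop with os/exec (the retry count and interval are assumptions; the flags are a trimmed subset of the ones logged):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForSSH(addr, keyPath string) error {
		// Run `exit 0` remotely until it succeeds, mirroring the DBG lines above.
		for attempt := 0; attempt < 30; attempt++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+addr, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s never came up", addr)
	}

	func main() {
		if err := waitForSSH("192.168.39.244",
			"/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa"); err != nil {
			panic(err)
		}
	}
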
	I0630 15:42:47.398648 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetConfigRaw
	I0630 15:42:47.399305 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:47.399545 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:47.399739 1605949 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:42:47.399748 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetState
	I0630 15:42:47.400966 1605949 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:42:47.400989 1605949 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:42:47.401001 1605949 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:42:47.401006 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.403800 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.404289 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.404312 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.404487 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.404673 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.404823 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.404959 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.405104 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.405439 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.405447 1605949 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:42:47.516706 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:42:47.516722 1605949 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:42:47.516730 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.521243 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.521672 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.521717 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.521883 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.522121 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.522267 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.522428 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.522605 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.522811 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.522818 1605949 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:42:47.638938 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:42:47.639025 1605949 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:42:47.639033 1605949 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:42:47.639044 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetMachineName
	I0630 15:42:47.639421 1605949 buildroot.go:166] provisioning hostname "cert-options-329017"
	I0630 15:42:47.639446 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetMachineName
	I0630 15:42:47.639687 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.642647 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.643159 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.643182 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.643414 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.643604 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.643765 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.643855 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.644075 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.644295 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.644301 1605949 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-329017 && echo "cert-options-329017" | sudo tee /etc/hostname
	I0630 15:42:47.775171 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-329017
	
	I0630 15:42:47.775192 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.778607 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.779160 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.779185 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.779438 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:47.779635 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.779821 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:47.780037 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:47.780265 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:47.780574 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:47.780596 1605949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-329017' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-329017/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-329017' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:42:47.908361 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:42:47.908398 1605949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:42:47.908425 1605949 buildroot.go:174] setting up certificates
	I0630 15:42:47.908458 1605949 provision.go:84] configureAuth start
	I0630 15:42:47.908471 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetMachineName
	I0630 15:42:47.908765 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:47.911812 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.912197 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.912221 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.912362 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:47.914963 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.915320 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:47.915337 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:47.915482 1605949 provision.go:143] copyHostCerts
	I0630 15:42:47.915552 1605949 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:42:47.915568 1605949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:42:47.915635 1605949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:42:47.915730 1605949 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:42:47.915733 1605949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:42:47.915755 1605949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:42:47.915807 1605949 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:42:47.915811 1605949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:42:47.915830 1605949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:42:47.915870 1605949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.cert-options-329017 san=[127.0.0.1 192.168.39.244 cert-options-329017 localhost minikube]
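
provision.go:117 issues a server certificate whose SAN list covers the loopback address, the VM IP, and the machine's host names, so any of them can be used to reach the daemon over TLS. A self-contained crypto/x509 sketch of producing such a SAN certificate (self-signed here for brevity; minikube signs with its CA key instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-329017"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the logged san=[...] list.
			DNSNames:    []string{"cert-options-329017", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.244")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
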
	I0630 15:42:48.294961 1605949 provision.go:177] copyRemoteCerts
	I0630 15:42:48.295039 1605949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:42:48.295067 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.298138 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.298574 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.298587 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.298863 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.299055 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.299246 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.299357 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:48.391183 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:42:48.419565 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0630 15:42:48.448731 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:42:48.479872 1605949 provision.go:87] duration metric: took 571.39712ms to configureAuth
	I0630 15:42:48.479897 1605949 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:42:48.480223 1605949 config.go:182] Loaded profile config "cert-options-329017": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:48.480323 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.483905 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.484284 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.484309 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.484551 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.484778 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.484979 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.485156 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.485343 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:48.485587 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:48.485598 1605949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:42:48.739637 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:42:48.739655 1605949 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:42:48.739662 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetURL
	I0630 15:42:48.741182 1605949 main.go:141] libmachine: (cert-options-329017) DBG | using libvirt version 6000000
	I0630 15:42:48.743448 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.743803 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.743869 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.744009 1605949 main.go:141] libmachine: Docker is up and running!
	I0630 15:42:48.744017 1605949 main.go:141] libmachine: Reticulating splines...
	I0630 15:42:48.744026 1605949 client.go:171] duration metric: took 25.866047024s to LocalClient.Create
	I0630 15:42:48.744049 1605949 start.go:167] duration metric: took 25.866115532s to libmachine.API.Create "cert-options-329017"
	I0630 15:42:48.744064 1605949 start.go:293] postStartSetup for "cert-options-329017" (driver="kvm2")
	I0630 15:42:48.744113 1605949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:42:48.744145 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:48.744394 1605949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:42:48.744412 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.746522 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.746867 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.746897 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.747054 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.747255 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.747431 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.747562 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:48.842964 1605949 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:42:48.848298 1605949 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:42:48.848322 1605949 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:42:48.848457 1605949 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:42:48.848562 1605949 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:42:48.848668 1605949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:42:48.861474 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:42:48.890263 1605949 start.go:296] duration metric: took 146.182957ms for postStartSetup
	I0630 15:42:48.890308 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetConfigRaw
	I0630 15:42:48.890983 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:48.893512 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.893855 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.893876 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.894166 1605949 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/config.json ...
	I0630 15:42:48.894351 1605949 start.go:128] duration metric: took 26.038708542s to createHost
	I0630 15:42:48.894370 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:48.896942 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.897432 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:48.897456 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:48.897664 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:48.897876 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.898045 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:48.898136 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:48.898268 1605949 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:48.898468 1605949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0630 15:42:48.898474 1605949 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:42:49.018952 1605949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298168.968307324
	
	I0630 15:42:49.018970 1605949 fix.go:216] guest clock: 1751298168.968307324
	I0630 15:42:49.018981 1605949 fix.go:229] Guest: 2025-06-30 15:42:48.968307324 +0000 UTC Remote: 2025-06-30 15:42:48.894357387 +0000 UTC m=+33.446096434 (delta=73.949937ms)
	I0630 15:42:49.019012 1605949 fix.go:200] guest clock delta is within tolerance: 73.949937ms
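
fix.go compares the guest's `date +%s.%N` output against the host clock and only resynchronizes when the delta leaves tolerance. A sketch of that delta computation (the parsing mirrors the logged command output; the one-second threshold is an assumption for illustration, not minikube's value):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` captured over SSH, as in the log above.
		out := "1751298168.968307324"
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative threshold
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
	}
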
	I0630 15:42:49.019018 1605949 start.go:83] releasing machines lock for "cert-options-329017", held for 26.163557784s
	I0630 15:42:49.019087 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.019476 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:49.023471 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.023868 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:49.023893 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.024089 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.024744 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.024967 1605949 main.go:141] libmachine: (cert-options-329017) Calling .DriverName
	I0630 15:42:49.025075 1605949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:42:49.025136 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:49.025194 1605949 ssh_runner.go:195] Run: cat /version.json
	I0630 15:42:49.025214 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHHostname
	I0630 15:42:49.029137 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.029468 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.029569 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:49.029594 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.029814 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:49.029915 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:49.029937 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:49.030117 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:49.030248 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHPort
	I0630 15:42:49.030363 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:49.030576 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHKeyPath
	I0630 15:42:49.030662 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:49.030705 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetSSHUsername
	I0630 15:42:49.030856 1605949 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/cert-options-329017/id_rsa Username:docker}
	I0630 15:42:49.156364 1605949 ssh_runner.go:195] Run: systemctl --version
	I0630 15:42:49.164909 1605949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:42:49.347007 1605949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:42:49.355516 1605949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:42:49.355584 1605949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:42:49.381697 1605949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:42:49.381714 1605949 start.go:495] detecting cgroup driver to use...
	I0630 15:42:49.381798 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:42:49.406025 1605949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:42:49.423108 1605949 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:42:49.423193 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:42:49.441149 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:42:49.461486 1605949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:42:49.606446 1605949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:42:49.753749 1605949 docker.go:246] disabling docker service ...
	I0630 15:42:49.753817 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:42:49.772689 1605949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:42:49.787603 1605949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:42:49.994593 1605949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:42:50.158582 1605949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:42:50.179726 1605949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:42:50.208926 1605949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:42:50.208978 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.225914 1605949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:42:50.225997 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.242063 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.260117 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.274149 1605949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:42:50.286660 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.300475 1605949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.321986 1605949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:50.335867 1605949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:42:50.347922 1605949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:42:50.347978 1605949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:42:50.361879 1605949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 15:42:50.373609 1605949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:42:50.523896 1605949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:42:50.650651 1605949 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:42:50.650724 1605949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:42:50.656156 1605949 start.go:563] Will wait 60s for crictl version
	I0630 15:42:50.656228 1605949 ssh_runner.go:195] Run: which crictl
	I0630 15:42:50.660921 1605949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:42:50.716328 1605949 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:42:50.716409 1605949 ssh_runner.go:195] Run: crio --version
	I0630 15:42:50.749644 1605949 ssh_runner.go:195] Run: crio --version
	I0630 15:42:50.779444 1605949 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:42:49.048090 1606244 machine.go:93] provisionDockerMachine start ...
	I0630 15:42:49.048134 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:49.048413 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.051984 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.052754 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.052795 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.053031 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.053285 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.053665 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.053856 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.054220 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:49.054568 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:49.054586 1606244 main.go:141] libmachine: About to run SSH command:
	hostname
	I0630 15:42:49.174505 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-691468
	
	I0630 15:42:49.174550 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:42:49.174835 1606244 buildroot.go:166] provisioning hostname "kubernetes-upgrade-691468"
	I0630 15:42:49.174865 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:42:49.175077 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.178509 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.179023 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.179061 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.179290 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.179540 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.179759 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.179975 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.180193 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:49.180509 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:49.180533 1606244 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-691468 && echo "kubernetes-upgrade-691468" | sudo tee /etc/hostname
	I0630 15:42:49.320759 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-691468
	
	I0630 15:42:49.320823 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.324112 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.324593 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.324627 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.324874 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.325092 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.325326 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.325533 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.325777 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:49.326003 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:49.326021 1606244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-691468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-691468/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-691468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:42:49.444213 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:42:49.444251 1606244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:42:49.444277 1606244 buildroot.go:174] setting up certificates
	I0630 15:42:49.444297 1606244 provision.go:84] configureAuth start
	I0630 15:42:49.444306 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetMachineName
	I0630 15:42:49.444622 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetIP
	I0630 15:42:49.448151 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.448606 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.448649 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.448981 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.451782 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.452224 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.452255 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.452437 1606244 provision.go:143] copyHostCerts
	I0630 15:42:49.452509 1606244 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:42:49.452534 1606244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:42:49.452598 1606244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:42:49.452710 1606244 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:42:49.452720 1606244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:42:49.452748 1606244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:42:49.452816 1606244 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:42:49.452825 1606244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:42:49.452845 1606244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:42:49.452923 1606244 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-691468 san=[127.0.0.1 192.168.50.75 kubernetes-upgrade-691468 localhost minikube]
	I0630 15:42:49.862253 1606244 provision.go:177] copyRemoteCerts
	I0630 15:42:49.862328 1606244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:42:49.862366 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:49.866009 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.866420 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:49.866454 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:49.866682 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:49.866882 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:49.867059 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:49.867179 1606244 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:42:49.960597 1606244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:42:49.992219 1606244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0630 15:42:50.024997 1606244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:42:50.055928 1606244 provision.go:87] duration metric: took 611.609586ms to configureAuth
	I0630 15:42:50.055967 1606244 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:42:50.056190 1606244 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:42:50.056286 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:50.059572 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:50.060189 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:50.060236 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:50.060528 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:50.060842 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:50.061089 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:50.061287 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:50.061546 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:50.061849 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:50.061884 1606244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W0630 15:42:49.532038 1605445 pod_ready.go:104] pod "coredns-674b8bbfcf-m5x9v" is not "Ready", error: <nil>
	I0630 15:42:50.533537 1605445 pod_ready.go:94] pod "coredns-674b8bbfcf-m5x9v" is "Ready"
	I0630 15:42:50.533575 1605445 pod_ready.go:86] duration metric: took 5.507569083s for pod "coredns-674b8bbfcf-m5x9v" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:50.536968 1605445 pod_ready.go:83] waiting for pod "etcd-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:42:52.542794 1605445 pod_ready.go:104] pod "etcd-pause-011818" is not "Ready", error: <nil>
	I0630 15:42:50.780829 1605949 main.go:141] libmachine: (cert-options-329017) Calling .GetIP
	I0630 15:42:50.784165 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:50.784603 1605949 main.go:141] libmachine: (cert-options-329017) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:a0", ip: ""} in network mk-cert-options-329017: {Iface:virbr4 ExpiryTime:2025-06-30 16:42:39 +0000 UTC Type:0 Mac:52:54:00:1c:2c:a0 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:cert-options-329017 Clientid:01:52:54:00:1c:2c:a0}
	I0630 15:42:50.784622 1605949 main.go:141] libmachine: (cert-options-329017) DBG | domain cert-options-329017 has defined IP address 192.168.39.244 and MAC address 52:54:00:1c:2c:a0 in network mk-cert-options-329017
	I0630 15:42:50.784860 1605949 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 15:42:50.789335 1605949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
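The grep/rewrite pair above is minikube's idempotent /etc/hosts update: first check whether host.minikube.internal already maps to the gateway IP, and only if it does not, strip any stale entry and append a fresh one before copying the file back with sudo. The same pattern as a small standalone Go helper (addHostsEntry is a hypothetical name; minikube runs the equivalent bash pipeline over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostsEntry rewrites a hosts file so exactly one line maps hostname to
// ip: drop any existing mapping for the hostname, then append the fresh one.
func addHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry for this hostname, mirrors grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := addHostsEntry("/tmp/hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}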
	I0630 15:42:50.804525 1605949 kubeadm.go:875] updating cluster {Name:cert-options-329017 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:cert-options-329017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8555 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:42:50.804653 1605949 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:42:50.804697 1605949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:42:50.847869 1605949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:42:50.847950 1605949 ssh_runner.go:195] Run: which lz4
	I0630 15:42:50.852429 1605949 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:42:50.856847 1605949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:42:50.856883 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:42:52.353170 1605949 crio.go:462] duration metric: took 1.500783762s to copy over tarball
	I0630 15:42:52.353239 1605949 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
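The extraction command above decompresses in-stream with lz4 (-I lz4) and preserves extended attributes (--xattrs --xattrs-include security.capability) so that file capabilities on the preloaded binaries survive unpacking. A sketch of issuing the same command from Go (run locally here for illustration; minikube sends it through ssh_runner, and extractPreload is a hypothetical helper name):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball into dest, keeping the
// security.capability xattrs, exactly as the tar invocation in the log above.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress with lz4 before untarring
		"-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}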
	W0630 15:42:54.544231 1605445 pod_ready.go:104] pod "etcd-pause-011818" is not "Ready", error: <nil>
	I0630 15:42:56.671739 1605445 pod_ready.go:94] pod "etcd-pause-011818" is "Ready"
	I0630 15:42:56.671770 1605445 pod_ready.go:86] duration metric: took 6.134777413s for pod "etcd-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:56.688788 1605445 pod_ready.go:83] waiting for pod "kube-apiserver-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:56.695429 1605445 pod_ready.go:94] pod "kube-apiserver-pause-011818" is "Ready"
	I0630 15:42:56.695464 1605445 pod_ready.go:86] duration metric: took 6.640138ms for pod "kube-apiserver-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:56.697813 1605445 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:42:58.704645 1605445 pod_ready.go:104] pod "kube-controller-manager-pause-011818" is not "Ready", error: <nil>
	I0630 15:42:59.704068 1605445 pod_ready.go:94] pod "kube-controller-manager-pause-011818" is "Ready"
	I0630 15:42:59.704103 1605445 pod_ready.go:86] duration metric: took 3.006259809s for pod "kube-controller-manager-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.706681 1605445 pod_ready.go:83] waiting for pod "kube-proxy-mgmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.711902 1605445 pod_ready.go:94] pod "kube-proxy-mgmjs" is "Ready"
	I0630 15:42:59.711940 1605445 pod_ready.go:86] duration metric: took 5.231945ms for pod "kube-proxy-mgmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.714330 1605445 pod_ready.go:83] waiting for pod "kube-scheduler-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.741396 1605445 pod_ready.go:94] pod "kube-scheduler-pause-011818" is "Ready"
	I0630 15:42:59.741443 1605445 pod_ready.go:86] duration metric: took 27.083819ms for pod "kube-scheduler-pause-011818" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:42:59.741460 1605445 pod_ready.go:40] duration metric: took 14.719311167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:42:59.795433 1605445 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:42:59.797218 1605445 out.go:177] * Done! kubectl is now configured to use "pause-011818" cluster and "default" namespace by default
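The pod_ready.go lines interleaved above all follow one polling pattern: re-check a pod's Ready condition on an interval until it holds or a deadline expires, logging a W-line on each miss. A generic Go sketch of that loop (the condition function below is a stand-in, not a real pod check, and waitReady is a hypothetical name):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady re-runs check every interval until it reports ready, errors, or
// the context deadline (the 6m0s-style timeouts in the log) is reached.
func waitReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for readiness")
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	start := time.Now()
	err := waitReady(ctx, 500*time.Millisecond, func() (bool, error) {
		return time.Since(start) > time.Second, nil // stand-in for a pod Ready check
	})
	fmt.Println("err:", err)
}

Driving the timeout from a context rather than an iteration count keeps these waits composable with cancellation, which is why the failures elsewhere in this report surface as "context deadline exceeded".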
	I0630 15:42:56.169593 1605949 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.816322876s)
	I0630 15:42:56.169616 1605949 crio.go:469] duration metric: took 3.816423861s to extract the tarball
	I0630 15:42:56.169625 1605949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 15:42:56.211569 1605949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:42:56.267365 1605949 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:42:56.267379 1605949 cache_images.go:84] Images are preloaded, skipping loading
	I0630 15:42:56.267387 1605949 kubeadm.go:926] updating node { 192.168.39.244 8555 v1.33.2 crio true true} ...
	I0630 15:42:56.267534 1605949 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-329017 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:cert-options-329017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 15:42:56.267628 1605949 ssh_runner.go:195] Run: crio config
	I0630 15:42:56.314295 1605949 cni.go:84] Creating CNI manager for ""
	I0630 15:42:56.314307 1605949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:42:56.314316 1605949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:42:56.314344 1605949 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8555 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-329017 NodeName:cert-options-329017 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:42:56.314475 1605949 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-329017"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.244"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
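The kubeadm config printed above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined with --- and rendered from the options struct at kubeadm.go:189. As an illustration only of how such a document can be templated from a parameter struct (the template text and field names here are assumptions, not minikube's actual templates):

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.BindPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type params struct {
	AdvertiseAddress, KubernetesVersion, ControlPlaneEndpoint string
	PodSubnet, ServiceSubnet                                  string
	BindPort                                                  int
}

func main() {
	// Values below are copied from the rendered config in the log.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, params{
		AdvertiseAddress:     "192.168.39.244",
		BindPort:             8555,
		KubernetesVersion:    "v1.33.2",
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}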
	
	I0630 15:42:56.314540 1605949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:42:56.327039 1605949 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:42:56.327114 1605949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:42:56.339231 1605949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0630 15:42:56.362952 1605949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:42:56.382935 1605949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I0630 15:42:56.404493 1605949 ssh_runner.go:195] Run: grep 192.168.39.244	control-plane.minikube.internal$ /etc/hosts
	I0630 15:42:56.408856 1605949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:42:56.425540 1605949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:42:56.576087 1605949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:42:56.606247 1605949 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017 for IP: 192.168.39.244
	I0630 15:42:56.606264 1605949 certs.go:194] generating shared ca certs ...
	I0630 15:42:56.606286 1605949 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:56.606536 1605949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:42:56.606596 1605949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:42:56.606605 1605949 certs.go:256] generating profile certs ...
	I0630 15:42:56.606770 1605949 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/client.key
	I0630 15:42:56.606801 1605949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/client.crt with IP's: []
	I0630 15:42:56.665386 1605949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/client.crt ...
	I0630 15:42:56.665443 1605949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/client.crt: {Name:mkf66fb7ad31ae6868d5bcb9dead86ad9c4ae0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:56.665699 1605949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/client.key ...
	I0630 15:42:56.665716 1605949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/client.key: {Name:mk7d6ea7495af065a4ea7fc77441b431d4fd2e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:56.665842 1605949 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.key.2d099ecc
	I0630 15:42:56.665862 1605949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.crt.2d099ecc with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244]
	I0630 15:42:57.020385 1605949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.crt.2d099ecc ...
	I0630 15:42:57.020410 1605949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.crt.2d099ecc: {Name:mkc27e3989545cb43351437bf235079e3a5a4f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:57.020605 1605949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.key.2d099ecc ...
	I0630 15:42:57.020619 1605949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.key.2d099ecc: {Name:mk00b90610c65ea451cb93c880b9622159806a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:57.020714 1605949 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.crt.2d099ecc -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.crt
	I0630 15:42:57.020798 1605949 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.key.2d099ecc -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.key
	I0630 15:42:57.020848 1605949 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.key
	I0630 15:42:57.020862 1605949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.crt with IP's: []
	I0630 15:42:57.232794 1605949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.crt ...
	I0630 15:42:57.232813 1605949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.crt: {Name:mkacf9793713a43b155bb76a7c19d4eb9efa63ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:57.233042 1605949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.key ...
	I0630 15:42:57.233057 1605949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.key: {Name:mk6fe18ad59f999650fb745a250e80bbfd920e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:42:57.233264 1605949 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:42:57.233298 1605949 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:42:57.233305 1605949 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:42:57.233327 1605949 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:42:57.233345 1605949 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:42:57.233362 1605949 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:42:57.233427 1605949 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:42:57.234001 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:42:57.278071 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:42:57.311301 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:42:57.342252 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:42:57.381336 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0630 15:42:57.413062 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0630 15:42:57.446214 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:42:57.480216 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/cert-options-329017/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0630 15:42:57.514145 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:42:57.546211 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:42:57.581889 1605949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:42:57.618861 1605949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:42:57.640931 1605949 ssh_runner.go:195] Run: openssl version
	I0630 15:42:57.648289 1605949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:42:57.664795 1605949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:42:57.670513 1605949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:42:57.670590 1605949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:42:57.678615 1605949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:42:57.693625 1605949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:42:57.708990 1605949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:42:57.714379 1605949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:42:57.714446 1605949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:42:57.721992 1605949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:42:57.736189 1605949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:42:57.749669 1605949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:42:57.754455 1605949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:42:57.754517 1605949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:42:57.761453 1605949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
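The openssl/ln sequences above install each CA under /etc/ssl/certs twice: once by name, and once as <subject-hash>.0, the link that OpenSSL-style TLS stacks open when resolving a CA by hashed subject (hence b5213941.0 for minikubeCA.pem). A sketch of creating both links from Go, shelling out to openssl for the hash (installCA is a hypothetical helper; the log does this with test/ln over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a CA certificate into certsDir by file name and by its
// OpenSSL subject hash (<hash>.0), mirroring the ln -fs pairs in the log.
func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	byName := filepath.Join(certsDir, filepath.Base(pemPath))
	byHash := filepath.Join(certsDir, hash+".0")
	for _, link := range []string{byName, byHash} {
		os.Remove(link) // emulate ln -fs (force re-link)
		if err := os.Symlink(pemPath, link); err != nil {
			return err
		}
	}
	fmt.Println("linked", pemPath, "as", byName, "and", byHash)
	return nil
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}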
	I0630 15:42:57.775792 1605949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:42:57.780359 1605949 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:42:57.780408 1605949 kubeadm.go:392] StartCluster: {Name:cert-options-329017 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:cert-options-329017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8555 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:42:57.780481 1605949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:42:57.780551 1605949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:42:57.825113 1605949 cri.go:89] found id: ""
	I0630 15:42:57.825189 1605949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:42:57.841286 1605949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:42:57.856287 1605949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:42:57.869769 1605949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:42:57.869782 1605949 kubeadm.go:157] found existing configuration files:
	
	I0630 15:42:57.869844 1605949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0630 15:42:57.881535 1605949 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:42:57.881587 1605949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:42:57.894089 1605949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0630 15:42:57.905813 1605949 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:42:57.905879 1605949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:42:57.918694 1605949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0630 15:42:57.930288 1605949 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:42:57.930351 1605949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:42:57.942257 1605949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0630 15:42:57.958168 1605949 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:42:57.958224 1605949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
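The four grep-then-rm pairs above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that kubeadm init regenerates it. The pattern, as a standalone Go sketch (cleanStaleKubeconfigs is a hypothetical name; the log does this with sudo grep and sudo rm -f):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleKubeconfigs removes every config that is missing or does not
// mention the expected https://<endpoint> URL, mirroring the grep/rm loop.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	needle := []byte("https://" + endpoint)
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && bytes.Contains(data, needle) {
			continue // config already targets the right endpoint
		}
		// Missing file or wrong endpoint: remove, ignoring errors like rm -f.
		os.Remove(p)
		fmt.Printf("removed stale config %s\n", p)
	}
}

func main() {
	cleanStaleKubeconfigs("control-plane.minikube.internal:8555", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}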
	I0630 15:42:57.969661 1605949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:42:58.141120 1605949 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:42:57.701682 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:42:57.701724 1606244 machine.go:96] duration metric: took 8.653605377s to provisionDockerMachine
	I0630 15:42:57.701743 1606244 start.go:293] postStartSetup for "kubernetes-upgrade-691468" (driver="kvm2")
	I0630 15:42:57.701762 1606244 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:42:57.701793 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:57.702376 1606244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:42:57.702422 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:57.705977 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.706337 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:57.706364 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.706583 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:57.706821 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:57.707017 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:57.707248 1606244 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:42:57.801380 1606244 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:42:57.806491 1606244 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:42:57.806521 1606244 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:42:57.806617 1606244 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:42:57.806741 1606244 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:42:57.806848 1606244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:42:57.820406 1606244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:42:57.859598 1606244 start.go:296] duration metric: took 157.817867ms for postStartSetup
	I0630 15:42:57.859658 1606244 fix.go:56] duration metric: took 8.840366789s for fixHost
	I0630 15:42:57.859691 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:57.862786 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.863178 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:57.863219 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.863372 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:57.863616 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:57.863849 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:57.864021 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:57.864244 1606244 main.go:141] libmachine: Using SSH client type: native
	I0630 15:42:57.864543 1606244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I0630 15:42:57.864557 1606244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:42:57.983880 1606244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298177.978344560
	
	I0630 15:42:57.983914 1606244 fix.go:216] guest clock: 1751298177.978344560
	I0630 15:42:57.983925 1606244 fix.go:229] Guest: 2025-06-30 15:42:57.97834456 +0000 UTC Remote: 2025-06-30 15:42:57.859665509 +0000 UTC m=+22.056678817 (delta=118.679051ms)
	I0630 15:42:57.983957 1606244 fix.go:200] guest clock delta is within tolerance: 118.679051ms
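The clock check above runs date +%s.%N on the guest and compares the result against the host-side timestamp recorded when the command returned; the ~118ms delta is inside tolerance, so no resync is performed. A small Go sketch of that comparison, fed the exact values from the log (clockDelta is a hypothetical helper; the tolerance value is an assumption for illustration):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the host-side reference time.
func clockDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostRef), nil
}

func main() {
	// Host reference ("Remote" in the fix.go line above).
	ref := time.Date(2025, 6, 30, 15, 42, 57, 859665509, time.UTC)
	d, err := clockDelta("1751298177.978344560", ref)
	if err != nil {
		panic(err)
	}
	fmt.Println("delta:", d, "within 2s tolerance:", d.Abs() < 2*time.Second)
}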
	I0630 15:42:57.983966 1606244 start.go:83] releasing machines lock for "kubernetes-upgrade-691468", held for 8.964714481s
	I0630 15:42:57.983993 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:57.984366 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetIP
	I0630 15:42:57.988242 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.988737 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:57.988771 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.989052 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:57.989854 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:57.990109 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .DriverName
	I0630 15:42:57.990228 1606244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:42:57.990307 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:57.990384 1606244 ssh_runner.go:195] Run: cat /version.json
	I0630 15:42:57.990409 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHHostname
	I0630 15:42:57.993718 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.993912 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.993998 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:57.994029 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.994287 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:57.994327 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c2:6f", ip: ""} in network mk-kubernetes-upgrade-691468: {Iface:virbr1 ExpiryTime:2025-06-30 16:42:05 +0000 UTC Type:0 Mac:52:54:00:ee:c2:6f Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:kubernetes-upgrade-691468 Clientid:01:52:54:00:ee:c2:6f}
	I0630 15:42:57.994355 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) DBG | domain kubernetes-upgrade-691468 has defined IP address 192.168.50.75 and MAC address 52:54:00:ee:c2:6f in network mk-kubernetes-upgrade-691468
	I0630 15:42:57.994517 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:57.994548 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHPort
	I0630 15:42:57.994654 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:57.994727 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHKeyPath
	I0630 15:42:57.994821 1606244 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:42:57.994876 1606244 main.go:141] libmachine: (kubernetes-upgrade-691468) Calling .GetSSHUsername
	I0630 15:42:57.995005 1606244 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/kubernetes-upgrade-691468/id_rsa Username:docker}
	I0630 15:42:58.110183 1606244 ssh_runner.go:195] Run: systemctl --version
	I0630 15:42:58.116433 1606244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:42:58.272988 1606244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:42:58.279981 1606244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:42:58.280067 1606244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:42:58.291149 1606244 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0630 15:42:58.291180 1606244 start.go:495] detecting cgroup driver to use...
	I0630 15:42:58.291262 1606244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:42:58.314604 1606244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:42:58.331763 1606244 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:42:58.331865 1606244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:42:58.350145 1606244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:42:58.365679 1606244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:42:58.695484 1606244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:42:59.051853 1606244 docker.go:246] disabling docker service ...
	I0630 15:42:59.051951 1606244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:42:59.111018 1606244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:42:59.147834 1606244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:42:59.495504 1606244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:42:59.865090 1606244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
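The sequence above disables the competing runtimes before CRI-O takes over: stop the socket and service, disable the socket so it cannot re-activate the unit, then mask the service outright. A hedged Go sketch of that stop-disable-mask pattern (disableService is a hypothetical helper; errors from already-stopped units are tolerated, as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// disableService runs systemctl stop -f, disable, and mask for one unit,
// continuing past failures the way the log's best-effort shutdown does.
func disableService(unit string) {
	for _, verb := range []string{"stop", "disable", "mask"} {
		args := []string{"systemctl", verb, unit}
		if verb == "stop" {
			args = []string{"systemctl", "stop", "-f", unit} // -f as in the log
		}
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%s %s: %v (%s) - continuing\n", verb, unit, err, out)
		}
	}
}

func main() {
	disableService("docker.socket")
	disableService("docker.service")
}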
	I0630 15:42:59.911278 1606244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:42:59.956269 1606244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:42:59.956331 1606244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:42:59.985902 1606244 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:42:59.985996 1606244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:00.005083 1606244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:00.024035 1606244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:00.053900 1606244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:43:00.078836 1606244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:00.109568 1606244 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:00.127624 1606244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:00.152545 1606244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:43:00.177001 1606244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
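The sed one-liners above rewrite individual keys such as pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart that follow. A Go sketch of that rewrite-or-append pattern for a single key (setCrioKey is a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey replaces an existing `key = "..."` line in a crio drop-in, or
// appends one if the key is absent, like the sed expressions in the log.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := setCrioKey("/etc/crio/crio.conf.d/02-crio.conf",
		"pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}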
	I0630 15:43:00.227198 1606244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:43:00.599407 1606244 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:43:01.475128 1606244 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:43:01.475257 1606244 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:43:01.481759 1606244 start.go:563] Will wait 60s for crictl version
	I0630 15:43:01.481840 1606244 ssh_runner.go:195] Run: which crictl
	I0630 15:43:01.485947 1606244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:43:01.527125 1606244 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:43:01.527217 1606244 ssh_runner.go:195] Run: crio --version
	I0630 15:43:01.560389 1606244 ssh_runner.go:195] Run: crio --version
	I0630 15:43:01.599444 1606244 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	
	
	==> CRI-O <==
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.192226858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ceb34bd-d849-45eb-884e-c8abba650d06 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.194754453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e8cb95f-18ba-47e7-b23f-6fc18cc5d564 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.196042755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298183196009143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e8cb95f-18ba-47e7-b23f-6fc18cc5d564 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.197103214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d9d11a5-69df-4c5d-97a0-0f90f0a7c409 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.197292603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d9d11a5-69df-4c5d-97a0-0f90f0a7c409 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.198100823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298147078320735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d,PodSandboxId:27b5e281bc5fdb0574461ea4d7d6661aa8539127bc2c5b8cdcbf66c5a139bc6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298144831650331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7,PodSandboxId:b24c2cc9a1481fd65083fe2352ac7300899ea13cf9e33101acf5c090c62652e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298144686868249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5,PodSandboxId:3f2647b4601a9b9510ed44fd0d1d67060c4024063b8fb7f8da929e32480bda36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298144565348583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4,PodSandboxId:619df19f77a01bbf26aaf9ee208296766c861c6227cdd8ea9a81bb651bf5c38f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298144422118394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba,PodSandboxId:d19ab491ab5ee6c48e618b086c5256911e31c9d2db9726d3ca6440f4c00fc57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298144375310301,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d9d11a5-69df-4c5d-97a0-0f90f0a7c409 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.267547538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc689f54-6071-4134-af02-8760653fa94a name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.267748077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc689f54-6071-4134-af02-8760653fa94a name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.269631959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8114b300-d75b-4f66-b153-72dce1ec431e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.270221712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298183270191957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8114b300-d75b-4f66-b153-72dce1ec431e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.270988296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d48e0448-8b60-4aef-897d-0c5032cb9537 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.271082132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d48e0448-8b60-4aef-897d-0c5032cb9537 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.271402645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89
1c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d
7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.
kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298147078320735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd09
2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d,PodSandboxId:27b5e281bc5fdb0574461ea4d7d6661aa8539127bc2c5b8cdcbf66c5a139bc6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298144831650331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7,PodSandboxId:b24c2cc9a1481fd65083fe2352ac7300899ea13cf9e33101acf5c090c62652e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298144686868249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5,PodSandboxId:3f2647b4601a9b9510ed44fd0d1d67060c4024063b8fb7f8da929e32480bda36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298144565348583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4,PodSandboxId:619df19f77a01bbf26aaf9ee208296766c861c6227cdd8ea9a81bb651bf5c38f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298144422118394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba,PodSandboxId:d19ab491ab5ee6c48e618b086c5256911e31c9d2db9726d3ca6440f4c00fc57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298144375310301,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d48e0448-8b60-4aef-897d-0c5032cb9537 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.334115264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3bbc15f-10a3-4b50-9994-03ee0df96979 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.334233697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3bbc15f-10a3-4b50-9994-03ee0df96979 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.335869590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed667c11-8ed7-4752-9acb-c121cfde6095 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.336479518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298183336392315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed667c11-8ed7-4752-9acb-c121cfde6095 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.337217824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5c45ea4-bed9-4e2f-a29a-107a22c0abbe name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.337305674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5c45ea4-bed9-4e2f-a29a-107a22c0abbe name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.337703961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89
1c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d
7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.
kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_EXITED,CreatedAt:1751298147078320735,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd09
2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d,PodSandboxId:27b5e281bc5fdb0574461ea4d7d6661aa8539127bc2c5b8cdcbf66c5a139bc6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_EXITED,CreatedAt:1751298144831650331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7,PodSandboxId:b24c2cc9a1481fd65083fe2352ac7300899ea13cf9e33101acf5c090c62652e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_EXITED,CreatedAt:1751298144686868249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5,PodSandboxId:3f2647b4601a9b9510ed44fd0d1d67060c4024063b8fb7f8da929e32480bda36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_EXITED,CreatedAt:1751298144565348583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4,PodSandboxId:619df19f77a01bbf26aaf9ee208296766c861c6227cdd8ea9a81bb651bf5c38f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_EXITED,CreatedAt:1751298144422118394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba,PodSandboxId:d19ab491ab5ee6c48e618b086c5256911e31c9d2db9726d3ca6440f4c00fc57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_EXITED,CreatedAt:1751298144375310301,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 891c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5c45ea4-bed9-4e2f-a29a-107a22c0abbe name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.340686538Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0904b8a-592c-45ad-803f-515f0c914485 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.340924184Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&PodSandboxMetadata{Name:coredns-674b8bbfcf-m5x9v,Uid:fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1751298146648183102,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,k8s-app: kube-dns,pod-template-hash: 674b8bbfcf,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-06-30T15:41:20.586212673Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-011818,Uid:348378b245f2f84637e6c74a775a1f14,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1751298146301995791,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 348378b245f2f84637e6c74a775a1f14,kubernetes.io/config.seen: 2025-06-30T15:41:15.357972185Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&PodSandboxMetadata{Name:etcd-pause-011818,Uid:7e83b9769c7ab2096e0acb50384b7cb0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1751298146299220575,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,tier: cont
rol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.93:2379,kubernetes.io/config.hash: 7e83b9769c7ab2096e0acb50384b7cb0,kubernetes.io/config.seen: 2025-06-30T15:41:15.357967246Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-011818,Uid:891c29df416c90e174a5864263ac6202,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1751298146248104040,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 891c29df416c90e174a5864263ac6202,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.93:8443,kubernetes.io/config.hash: 891c29df416c90e174a5864263ac6202,kubernetes.io/config.seen: 2025-06-30T15
:41:15.357971158Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&PodSandboxMetadata{Name:kube-proxy-mgmjs,Uid:8b8ac108-b30d-4905-8502-8bfde43240da,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1751298146233305548,Labels:map[string]string{controller-revision-hash: 7f964d48ff,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8ac108-b30d-4905-8502-8bfde43240da,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-06-30T15:41:20.402519330Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-011818,Uid:93fb579e130c9d6f006d7c2e7b8787b6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1751298146125926658,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d7c2e7b8787b6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 93fb579e130c9d6f006d7c2e7b8787b6,kubernetes.io/config.seen: 2025-06-30T15:41:15.357973114Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d0904b8a-592c-45ad-803f-515f0c914485 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.341933507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d589271-c045-425f-878c-85dd943fa89a name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.342011365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d589271-c045-425f-878c-85dd943fa89a name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:43:03 pause-011818 crio[3209]: time="2025-06-30 15:43:03.342217325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c,PodSandboxId:303940235d0803e7f2af6fd42d808bee5c88b036e98ae20196de66e2bae82510,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,State:CONTAINER_RUNNING,CreatedAt:1751298165062493744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-674b8bbfcf-m5x9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62,},Annotations:map[string]string{io.kubernetes.container.hash: eafd092d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00,PodSandboxId:cf1e7e22016691df82d15f8dd0698d32394f2146d939aed292f73aa83b9f59c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,State:CONTAINER_RUNNING,CreatedAt:1751298164464249583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 8b8ac108-b30d-4905-8502-8bfde43240da,},Annotations:map[string]string{io.kubernetes.container.hash: da6b8150,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e,PodSandboxId:c819720ebd365df537d8e20d712b54bb0a62e27b1f066c3a28bd7d1e1aec40af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,State:CONTAINER_RUNNING,CreatedAt:1751298159813354656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 348378b245f2f84637e6c74a775a1f14,},Annotations:map[string]string{io.kubernetes.container.hash: 8261a30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449,PodSandboxId:81604ac034f43edb67026580e3ef5592a062de54dbed5be464655fa8440fbc3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,State:CONTAINER_RUNNING,CreatedAt:1751298159820239105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89
1c29df416c90e174a5864263ac6202,},Annotations:map[string]string{io.kubernetes.container.hash: e4dd5970,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4,PodSandboxId:2fe8b460098200fcbefcbfc9a7e1654b7ae9fde46ce475c18144d3a90238e690,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,State:CONTAINER_RUNNING,CreatedAt:1751298159827069090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93fb579e130c9d6f006d
7c2e7b8787b6,},Annotations:map[string]string{io.kubernetes.container.hash: c7eb0318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0,PodSandboxId:6062c62e33576f07cb19866b15aeacb6e68b80c285497a6708b6f8e87fad8366,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,State:CONTAINER_RUNNING,CreatedAt:1751298159775230420,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-011818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e83b9769c7ab2096e0acb50384b7cb0,},Annotations:map[string]string{io.
kubernetes.container.hash: 77f174b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d589271-c045-425f-878c-85dd943fa89a name=/runtime.v1.RuntimeService/ListContainers
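
The ListContainers/ListPodSandbox traffic above is plain CRI gRPC against the CRI-O socket, so the same dump can be reproduced outside the test harness. A minimal sketch (hypothetical, not part of minikube; it assumes Go with google.golang.org/grpc and k8s.io/cri-api available, and the unix:///var/run/crio/crio.sock endpoint that CRI-O conventionally serves):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI-O runtime socket (assumed default path).
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty filter mirrors the "No filters were applied" requests above.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		// Truncated ID, name, attempt and state: roughly the columns of
    		// the "container status" table below.
    		fmt.Printf("%s  %s  attempt=%d  %s\n",
    			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }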
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10c7dd782d613       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   18 seconds ago      Running             coredns                   2                   303940235d080       coredns-674b8bbfcf-m5x9v
	5e3cf46766f91       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19   18 seconds ago      Running             kube-proxy                2                   cf1e7e2201669       kube-proxy-mgmjs
	3238028e858f5       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b   23 seconds ago      Running             kube-scheduler            2                   2fe8b46009820       kube-scheduler-pause-011818
	09c748be02f81       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e   23 seconds ago      Running             kube-apiserver            2                   81604ac034f43       kube-apiserver-pause-011818
	c7cdc277a6a11       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2   23 seconds ago      Running             kube-controller-manager   2                   c819720ebd365       kube-controller-manager-pause-011818
	d66b32dd776d9       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   23 seconds ago      Running             etcd                      2                   6062c62e33576       etcd-pause-011818
	5b9dfb17fd2a2       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   36 seconds ago      Exited              coredns                   1                   303940235d080       coredns-674b8bbfcf-m5x9v
	7e163e9ac9670       661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19   38 seconds ago      Exited              kube-proxy                1                   27b5e281bc5fd       kube-proxy-mgmjs
	308e6e5defd8c       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   38 seconds ago      Exited              etcd                      1                   b24c2cc9a1481       etcd-pause-011818
	94e00cb01194c       ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2   38 seconds ago      Exited              kube-controller-manager   1                   3f2647b4601a9       kube-controller-manager-pause-011818
	3bd671174d97d       cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b   39 seconds ago      Exited              kube-scheduler            1                   619df19f77a01       kube-scheduler-pause-011818
	77a634c0334d1       ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e   39 seconds ago      Exited              kube-apiserver            1                   d19ab491ab5ee       kube-apiserver-pause-011818
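
(The ATTEMPT column above is the same counter that appears as io.kubernetes.container.restartCount in the raw ListContainers payloads: every component shows an Exited attempt 1 from before the restart and a Running attempt 2 after it. A listing in this shape can presumably also be pulled on the node with crictl ps -a pointed at the same CRI-O socket.)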
	
	
	==> coredns [10c7dd782d61371946b81706b9a5fce1aba4d8d76c004a8dbb8372b0d081f53c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:36043 - 6235 "HINFO IN 479217239100511728.334149645577510968. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.037237828s
	
	
	==> coredns [5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47160 - 43593 "HINFO IN 6357363114233410437.8443103519251798570. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044394046s
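
(These two CoreDNS logs bracket the control-plane restart: the exited instance [5b9dfb17fd2a2] cannot reach the kubernetes service VIP at 10.96.0.1:443 (connection refused) while the kube-apiserver is down, briefly starts with an unsynced API, and is then terminated (SIGTERM, lameduck); the replacement instance [10c7dd782d613] comes up cleanly once the API is reachable.)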
	
	
	==> describe nodes <==
	Name:               pause-011818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-011818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=pause-011818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T15_41_16_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 15:41:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-011818
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 15:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 15:42:43 +0000   Mon, 30 Jun 2025 15:41:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.93
	  Hostname:    pause-011818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3044784Ki
	  pods:               110
	System Info:
	  Machine ID:                 270c033ae6994c2ea575daba35bfc05b
	  System UUID:                270c033a-e699-4c2e-a575-daba35bfc05b
	  Boot ID:                    863ffb22-37fa-4962-acfc-7c93023ccee4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-674b8bbfcf-m5x9v                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     103s
	  kube-system                 etcd-pause-011818                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         108s
	  kube-system                 kube-apiserver-pause-011818             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-pause-011818    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-mgmjs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-pause-011818             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     108s               kubelet          Node pause-011818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node pause-011818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node pause-011818 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  NodeReady                107s               kubelet          Node pause-011818 status is now: NodeReady
	  Normal  RegisteredNode           104s               node-controller  Node pause-011818 event: Registered Node pause-011818 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-011818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-011818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-011818 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-011818 event: Registered Node pause-011818 in Controller
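
The node description above can be regenerated at any point while the profile exists; both commands are standard and mirror invocations used elsewhere in this report:

    kubectl --context pause-011818 describe node pause-011818
    out/minikube-linux-amd64 status -p pause-011818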
	
	
	==> dmesg <==
	[Jun30 15:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000002] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001119] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002988] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.162940] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.091074] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 15:41] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.110929] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.136062] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.385295] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.760443] kauditd_printk_skb: 69 callbacks suppressed
	[Jun30 15:42] kauditd_printk_skb: 199 callbacks suppressed
	[  +5.531124] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.156833] kauditd_printk_skb: 4 callbacks suppressed
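
Nothing in the dmesg excerpt is a failure cause: the nomodeset and Spectre lines are boot-time advisories, the NFSD recovery-directory errors are expected in this guest image, and the kauditd lines only report suppressed audit backlog. To re-read the ring buffer inside the guest, a sketch assuming the VM is still up:

    out/minikube-linux-amd64 ssh -p pause-011818 -- sudo dmesg | tail -n 40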
	
	
	==> etcd [308e6e5defd8c354f116f1dc4cdc3e6f16fa868c80ef9cb726c399f1ba998ef7] <==
	
	
	==> etcd [d66b32dd776d956976ddbc522ab22881caca715fa0b6038ff4f733c3179af1d0] <==
	{"level":"info","ts":"2025-06-30T15:42:41.568743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa received MsgPreVoteResp from 4e6e2c9029caadaa at term 2"}
	{"level":"info","ts":"2025-06-30T15:42:41.568775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.568820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa received MsgVoteResp from 4e6e2c9029caadaa at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.568840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6e2c9029caadaa became leader at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.568871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6e2c9029caadaa elected leader 4e6e2c9029caadaa at term 3"}
	{"level":"info","ts":"2025-06-30T15:42:41.578925Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"4e6e2c9029caadaa","local-member-attributes":"{Name:pause-011818 ClientURLs:[https://192.168.61.93:2379]}","request-path":"/0/members/4e6e2c9029caadaa/attributes","cluster-id":"4a4285095021b5a3","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T15:42:41.579134Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:42:41.579528Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T15:42:41.581978Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:42:41.582728Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T15:42:41.584994Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T15:42:41.588139Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.93:2379"}
	{"level":"info","ts":"2025-06-30T15:42:41.586511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T15:42:41.608102Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T15:42:54.513777Z","caller":"traceutil/trace.go:171","msg":"trace[798175544] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"121.520399ms","start":"2025-06-30T15:42:54.392232Z","end":"2025-06-30T15:42:54.513753Z","steps":["trace[798175544] 'process raft request'  (duration: 59.551047ms)","trace[798175544] 'compare'  (duration: 61.832353ms)"],"step_count":2}
	{"level":"info","ts":"2025-06-30T15:42:56.032078Z","caller":"traceutil/trace.go:171","msg":"trace[1434113713] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"117.006114ms","start":"2025-06-30T15:42:55.915051Z","end":"2025-06-30T15:42:56.032057Z","steps":["trace[1434113713] 'process raft request'  (duration: 116.598514ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T15:42:56.227238Z","caller":"traceutil/trace.go:171","msg":"trace[253546296] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"180.955801ms","start":"2025-06-30T15:42:56.046262Z","end":"2025-06-30T15:42:56.227217Z","steps":["trace[253546296] 'process raft request'  (duration: 138.645195ms)","trace[253546296] 'compare'  (duration: 42.195884ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T15:42:56.660880Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.043969ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12513981371954611519 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" value_size:4574 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-06-30T15:42:56.660998Z","caller":"traceutil/trace.go:171","msg":"trace[1557464774] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"125.350306ms","start":"2025-06-30T15:42:56.535636Z","end":"2025-06-30T15:42:56.660986Z","steps":["trace[1557464774] 'read index received'  (duration: 27.183µs)","trace[1557464774] 'applied index is now lower than readState.Index'  (duration: 125.322048ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T15:42:56.661061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.4193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-011818\" limit:1 ","response":"range_response_count:1 size:5747"}
	{"level":"info","ts":"2025-06-30T15:42:56.661076Z","caller":"traceutil/trace.go:171","msg":"trace[2050555326] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-011818; range_end:; response_count:1; response_revision:484; }","duration":"125.457893ms","start":"2025-06-30T15:42:56.535613Z","end":"2025-06-30T15:42:56.661071Z","steps":["trace[2050555326] 'agreement among raft nodes before linearized reading'  (duration: 125.406859ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T15:42:56.661253Z","caller":"traceutil/trace.go:171","msg":"trace[1185542617] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"420.539593ms","start":"2025-06-30T15:42:56.240702Z","end":"2025-06-30T15:42:56.661241Z","steps":["trace[1185542617] 'process raft request'  (duration: 173.248259ms)","trace[1185542617] 'compare'  (duration: 245.949958ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T15:42:56.664936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-30T15:42:56.240684Z","time spent":"424.184362ms","remote":"127.0.0.1:34688","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4636,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" value_size:4574 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-011818\" > >"}
	{"level":"warn","ts":"2025-06-30T15:42:56.989526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.139434ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T15:42:56.989605Z","caller":"traceutil/trace.go:171","msg":"trace[1701074428] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:484; }","duration":"198.234599ms","start":"2025-06-30T15:42:56.791359Z","end":"2025-06-30T15:42:56.989594Z","steps":["trace[1701074428] 'range keys from in-memory index tree'  (duration: 198.111801ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:43:03 up 2 min,  0 users,  load average: 0.58, 0.35, 0.14
	Linux pause-011818 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [09c748be02f81bbfb64f8034f033bfba342682e946638b7f9c97844a4f472449] <==
	I0630 15:42:43.302550       1 shared_informer.go:357] "Caches are synced" controller="configmaps"
	I0630 15:42:43.303178       1 shared_informer.go:357] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0630 15:42:43.303727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0630 15:42:43.305766       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0630 15:42:43.305893       1 shared_informer.go:357] "Caches are synced" controller="ipallocator-repair-controller"
	I0630 15:42:43.309504       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:42:43.312221       1 shared_informer.go:357] "Caches are synced" controller="crd-autoregister"
	I0630 15:42:43.312283       1 shared_informer.go:357] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0630 15:42:43.312328       1 default_servicecidr_controller.go:136] Shutting down kubernetes-service-cidr-controller
	I0630 15:42:43.312699       1 aggregator.go:171] initial CRD sync complete...
	I0630 15:42:43.312752       1 autoregister_controller.go:144] Starting autoregister controller
	I0630 15:42:43.312759       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0630 15:42:43.312764       1 cache.go:39] Caches are synced for autoregister controller
	E0630 15:42:43.315203       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0630 15:42:43.343756       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0630 15:42:43.369792       1 shared_informer.go:357] "Caches are synced" controller="node_authorizer"
	I0630 15:42:44.106933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0630 15:42:44.531363       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 15:42:44.607867       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0630 15:42:44.656300       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 15:42:44.666588       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 15:42:46.736062       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0630 15:42:46.784082       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 15:42:46.843244       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 15:42:47.040040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
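
The one error in this apiserver log ("no API server IP addresses were listed in storage, refusing to erase all endpoints") is typically transient on a restart; the quota evaluators registering immediately afterwards show admission running normally. To confirm the kubernetes Service endpoints repopulated:

    kubectl --context pause-011818 -n default get endpoints kubernetes -o wide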
	
	
	==> kube-apiserver [77a634c0334d18e19033a82bcc8c388b899b48dd15d188eeb6f9bcb5f770c0ba] <==
	
	
	==> kube-controller-manager [94e00cb01194c9653cb7adb0c61d12610752c946cfd3cd44007dd121da4c2ba5] <==
	
	
	==> kube-controller-manager [c7cdc277a6a11393034ce82a133ea6de28b829cb2e09438ba6ba3f0eb720095e] <==
	I0630 15:42:46.537277       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 15:42:46.537412       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-011818"
	I0630 15:42:46.537590       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 15:42:46.541844       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0630 15:42:46.541880       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0630 15:42:46.545801       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0630 15:42:46.548009       1 shared_informer.go:357] "Caches are synced" controller="TTL"
	I0630 15:42:46.655066       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 15:42:46.721872       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0630 15:42:46.721892       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0630 15:42:46.724211       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0630 15:42:46.724301       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0630 15:42:46.731166       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 15:42:46.734249       1 shared_informer.go:357] "Caches are synced" controller="PV protection"
	I0630 15:42:46.734463       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0630 15:42:46.792011       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 15:42:46.792259       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 15:42:46.795663       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 15:42:46.799296       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 15:42:46.842575       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 15:42:46.881508       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 15:42:47.263576       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 15:42:47.331305       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 15:42:47.331337       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 15:42:47.331345       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5e3cf46766f91c99553803bc244a01bafa2f70edec9d9850124df2bd64796a00] <==
	E0630 15:42:44.662077       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 15:42:44.678791       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.61.93"]
	E0630 15:42:44.678905       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 15:42:44.727337       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 15:42:44.727392       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 15:42:44.727454       1 server_linux.go:145] "Using iptables Proxier"
	I0630 15:42:44.744151       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 15:42:44.744527       1 server.go:516] "Version info" version="v1.33.2"
	I0630 15:42:44.744552       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:42:44.749642       1 config.go:199] "Starting service config controller"
	I0630 15:42:44.750351       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 15:42:44.750379       1 config.go:105] "Starting endpoint slice config controller"
	I0630 15:42:44.750383       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 15:42:44.750407       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 15:42:44.750446       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 15:42:44.750532       1 config.go:329] "Starting node config controller"
	I0630 15:42:44.750536       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 15:42:44.850700       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 15:42:44.850817       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 15:42:44.851353       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 15:42:44.851726       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
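
kube-proxy's ip6 nftables cleanup fails with "Operation not supported" because the guest kernel lacks IPv6 nftables support; it then finds no IPv6 iptables either and proceeds in single-stack IPv4 iptables mode, which the final "Caches are synced" lines confirm. A quick sanity check on the resulting ruleset from the host, as a sketch:

    out/minikube-linux-amd64 ssh -p pause-011818 -- sudo iptables-save | grep -c KUBE-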
	
	
	==> kube-proxy [7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d] <==
	
	
	==> kube-scheduler [3238028e858f5654d58bc054f4a0f7f8ed766ce6576ccc957985f0bb8965c4e4] <==
	W0630 15:42:43.130637       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0630 15:42:43.130702       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0630 15:42:43.237067       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.2"
	I0630 15:42:43.239506       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 15:42:43.244200       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0630 15:42:43.244583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:42:43.244673       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0630 15:42:43.244709       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0630 15:42:43.262547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 15:42:43.262876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 15:42:43.263090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 15:42:43.263286       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 15:42:43.263406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 15:42:43.263620       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 15:42:43.263832       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 15:42:43.263927       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 15:42:43.266801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 15:42:43.267178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 15:42:43.267509       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 15:42:43.267845       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 15:42:43.268099       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 15:42:43.268394       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 15:42:43.268633       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 15:42:43.269310       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 15:42:44.567014       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
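
The burst of "Failed to watch ... is forbidden" errors at 15:42:43 is the usual startup race: the scheduler's informers start before the restarted apiserver has warmed its RBAC caches, and the errors stop once the client-ca informer syncs a second later. Had they persisted, the default binding to inspect would be:

    kubectl --context pause-011818 get clusterrolebinding system:kube-scheduler -o yaml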
	
	
	==> kube-scheduler [3bd671174d97d6381afb9be87db3443e4a7aea655ea5b06aae2bfcf5a03c47a4] <==
	
	
	==> kubelet <==
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.266069    3993 status_manager.go:895] "Failed to get status for pod" podUID="fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62" pod="kube-system/coredns-674b8bbfcf-m5x9v" err="pods \"coredns-674b8bbfcf-m5x9v\" is forbidden: User \"system:node:pause-011818\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-011818' and this object"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.268614    3993 status_manager.go:895] "Failed to get status for pod" podUID="8b8ac108-b30d-4905-8502-8bfde43240da" pod="kube-system/kube-proxy-mgmjs" err="pods \"kube-proxy-mgmjs\" is forbidden: User \"system:node:pause-011818\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-011818' and this object"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.273269    3993 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.275677    3993 status_manager.go:895] "Failed to get status for pod" podUID="7e83b9769c7ab2096e0acb50384b7cb0" pod="kube-system/etcd-pause-011818" err="pods \"etcd-pause-011818\" is forbidden: User \"system:node:pause-011818\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-011818' and this object"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.281902    3993 status_manager.go:895] "Failed to get status for pod" podUID="891c29df416c90e174a5864263ac6202" pod="kube-system/kube-apiserver-pause-011818" err=<
	Jun 30 15:42:43 pause-011818 kubelet[3993]:         pods "kube-apiserver-pause-011818" is forbidden: User "system:node:pause-011818" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-011818' and this object
	Jun 30 15:42:43 pause-011818 kubelet[3993]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	Jun 30 15:42:43 pause-011818 kubelet[3993]:  >
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.336762    3993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8ac108-b30d-4905-8502-8bfde43240da-xtables-lock\") pod \"kube-proxy-mgmjs\" (UID: \"8b8ac108-b30d-4905-8502-8bfde43240da\") " pod="kube-system/kube-proxy-mgmjs"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.336970    3993 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8ac108-b30d-4905-8502-8bfde43240da-lib-modules\") pod \"kube-proxy-mgmjs\" (UID: \"8b8ac108-b30d-4905-8502-8bfde43240da\") " pod="kube-system/kube-proxy-mgmjs"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.400312    3993 kubelet_node_status.go:124] "Node was previously registered" node="pause-011818"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.400552    3993 kubelet_node_status.go:78] "Successfully registered node" node="pause-011818"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.400619    3993 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.401850    3993 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: I0630 15:42:43.447311    3993 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-011818"
	Jun 30 15:42:43 pause-011818 kubelet[3993]: E0630 15:42:43.461035    3993 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-011818\" already exists" pod="kube-system/kube-apiserver-pause-011818"
	Jun 30 15:42:44 pause-011818 kubelet[3993]: E0630 15:42:44.341595    3993 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jun 30 15:42:44 pause-011818 kubelet[3993]: E0630 15:42:44.341874    3993 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62-config-volume podName:fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62 nodeName:}" failed. No retries permitted until 2025-06-30 15:42:44.841843056 +0000 UTC m=+5.715308111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62-config-volume") pod "coredns-674b8bbfcf-m5x9v" (UID: "fc43ab9f-c4cf-4732-a418-9f6c2e8b7d62") : failed to sync configmap cache: timed out waiting for the condition
	Jun 30 15:42:44 pause-011818 kubelet[3993]: I0630 15:42:44.451933    3993 scope.go:117] "RemoveContainer" containerID="7e163e9ac9670a18c40d533cddf617ad6c870ce062ee7309c531bbf316de593d"
	Jun 30 15:42:45 pause-011818 kubelet[3993]: I0630 15:42:45.051128    3993 scope.go:117] "RemoveContainer" containerID="5b9dfb17fd2a2bd6e55c7aaabb227c8c62192acb316b058bc647a34548a9f10b"
	Jun 30 15:42:49 pause-011818 kubelet[3993]: E0630 15:42:49.397683    3993 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298169397058046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 15:42:49 pause-011818 kubelet[3993]: E0630 15:42:49.397732    3993 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298169397058046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 15:42:50 pause-011818 kubelet[3993]: I0630 15:42:50.202742    3993 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 30 15:42:59 pause-011818 kubelet[3993]: E0630 15:42:59.401174    3993 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298179400291191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jun 30 15:42:59 pause-011818 kubelet[3993]: E0630 15:42:59.401771    3993 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751298179400291191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125816,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
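
The repeating eviction-manager errors ("missing image stats" for /var/lib/containers/storage/overlay-images) reflect a mismatch between what the kubelet expects from ImageFsInfo and what this CRI-O version returns; they recur on a timer and are noise relative to this test's failure. The raw CRI view can be checked directly (crictl ships in the minikube guest; sudo assumed):

    out/minikube-linux-amd64 ssh -p pause-011818 -- sudo crictl imagefsinfo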
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-011818 -n pause-011818
helpers_test.go:261: (dbg) Run:  kubectl --context pause-011818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (91.17s)
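
To iterate on this failure locally, the subtest can be run on its own; a sketch assuming a minikube source checkout with out/minikube-linux-amd64 already built (the integration suite shells out to that binary, and exact flags may differ by checkout):

    go test ./test/integration -run "TestPause/serial/SecondStartNoReconfiguration" -timeout 60m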

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (273.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-836310 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-836310 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.815203192s)

                                                
                                                
-- stdout --
	* [old-k8s-version-836310] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-836310" primary control-plane node in "old-k8s-version-836310" cluster
	* Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
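
The doubled "Generating certificates and keys ... Booting up control plane" pair in the stdout above suggests kubeadm init was retried before minikube gave up with exit status 109. For the full kubeadm and container-runtime output from a failed profile, minikube can write everything to a file:

    out/minikube-linux-amd64 logs -p old-k8s-version-836310 --file=old-k8s-version-836310.log
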
** stderr ** 
	I0630 15:43:05.857884 1606769 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:43:05.858280 1606769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:43:05.858295 1606769 out.go:358] Setting ErrFile to fd 2...
	I0630 15:43:05.858300 1606769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:43:05.858528 1606769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:43:05.859176 1606769 out.go:352] Setting JSON to false
	I0630 15:43:05.860456 1606769 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33878,"bootTime":1751264308,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:43:05.860578 1606769 start.go:140] virtualization: kvm guest
	I0630 15:43:05.862839 1606769 out.go:177] * [old-k8s-version-836310] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:43:05.864267 1606769 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:43:05.864322 1606769 notify.go:220] Checking for updates...
	I0630 15:43:05.866678 1606769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:43:05.868022 1606769 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:43:05.869185 1606769 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:43:05.870619 1606769 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:43:05.871840 1606769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:43:05.873576 1606769 config.go:182] Loaded profile config "cert-expiration-775975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:43:05.873722 1606769 config.go:182] Loaded profile config "cert-options-329017": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:43:05.873856 1606769 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:43:05.874008 1606769 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:43:05.915329 1606769 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:43:05.916564 1606769 start.go:304] selected driver: kvm2
	I0630 15:43:05.916578 1606769 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:43:05.916589 1606769 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:43:05.917308 1606769 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:05.917430 1606769 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:43:05.935506 1606769 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:43:05.935582 1606769 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 15:43:05.935972 1606769 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:43:05.936025 1606769 cni.go:84] Creating CNI manager for ""
	I0630 15:43:05.936085 1606769 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:43:05.936094 1606769 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 15:43:05.936200 1606769 start.go:347] cluster config:
	{Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:43:05.936327 1606769 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:43:05.938161 1606769 out.go:177] * Starting "old-k8s-version-836310" primary control-plane node in "old-k8s-version-836310" cluster
	I0630 15:43:05.939504 1606769 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 15:43:05.939573 1606769 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0630 15:43:05.939589 1606769 cache.go:56] Caching tarball of preloaded images
	I0630 15:43:05.939727 1606769 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:43:05.939742 1606769 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0630 15:43:05.939860 1606769 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/config.json ...
	I0630 15:43:05.939959 1606769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/config.json: {Name:mk0f52dce51d02462a458f95b1db15d785dd8567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:05.940165 1606769 start.go:360] acquireMachinesLock for old-k8s-version-836310: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:43:05.940208 1606769 start.go:364] duration metric: took 25.737µs to acquireMachinesLock for "old-k8s-version-836310"
	I0630 15:43:05.940231 1606769 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:43:05.940320 1606769 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 15:43:05.942722 1606769 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0630 15:43:05.942944 1606769 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:43:05.943011 1606769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:43:05.959785 1606769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0630 15:43:05.960296 1606769 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:43:05.960967 1606769 main.go:141] libmachine: Using API Version  1
	I0630 15:43:05.960994 1606769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:43:05.961472 1606769 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:43:05.961695 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetMachineName
	I0630 15:43:05.961897 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:05.962079 1606769 start.go:159] libmachine.API.Create for "old-k8s-version-836310" (driver="kvm2")
	I0630 15:43:05.962113 1606769 client.go:168] LocalClient.Create starting
	I0630 15:43:05.962155 1606769 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 15:43:05.962220 1606769 main.go:141] libmachine: Decoding PEM data...
	I0630 15:43:05.962257 1606769 main.go:141] libmachine: Parsing certificate...
	I0630 15:43:05.962356 1606769 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 15:43:05.962388 1606769 main.go:141] libmachine: Decoding PEM data...
	I0630 15:43:05.962408 1606769 main.go:141] libmachine: Parsing certificate...
	I0630 15:43:05.962433 1606769 main.go:141] libmachine: Running pre-create checks...
	I0630 15:43:05.962453 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .PreCreateCheck
	I0630 15:43:05.962852 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetConfigRaw
	I0630 15:43:05.963326 1606769 main.go:141] libmachine: Creating machine...
	I0630 15:43:05.963341 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .Create
	I0630 15:43:05.963527 1606769 main.go:141] libmachine: (old-k8s-version-836310) creating KVM machine...
	I0630 15:43:05.963547 1606769 main.go:141] libmachine: (old-k8s-version-836310) creating network...
	I0630 15:43:05.965330 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found existing default KVM network
	I0630 15:43:05.966641 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:05.966415 1606791 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bd:bf:63} reservation:<nil>}
	I0630 15:43:05.967294 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:05.967197 1606791 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:93:42} reservation:<nil>}
	I0630 15:43:05.968531 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:05.968456 1606791 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000286d70}
	I0630 15:43:05.968583 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | created network xml: 
	I0630 15:43:05.968610 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | <network>
	I0630 15:43:05.968638 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |   <name>mk-old-k8s-version-836310</name>
	I0630 15:43:05.968653 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |   <dns enable='no'/>
	I0630 15:43:05.968662 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |   
	I0630 15:43:05.968674 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0630 15:43:05.968692 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |     <dhcp>
	I0630 15:43:05.968701 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0630 15:43:05.968724 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |     </dhcp>
	I0630 15:43:05.968738 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |   </ip>
	I0630 15:43:05.968775 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG |   
	I0630 15:43:05.968799 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | </network>
	I0630 15:43:05.968814 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | 
	I0630 15:43:05.975079 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | trying to create private KVM network mk-old-k8s-version-836310 192.168.61.0/24...
	I0630 15:43:06.064920 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | private KVM network mk-old-k8s-version-836310 192.168.61.0/24 created
	I0630 15:43:06.065067 1606769 main.go:141] libmachine: (old-k8s-version-836310) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310 ...
	I0630 15:43:06.065119 1606769 main.go:141] libmachine: (old-k8s-version-836310) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 15:43:06.065136 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:06.065058 1606791 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:43:06.065206 1606769 main.go:141] libmachine: (old-k8s-version-836310) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 15:43:06.407813 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:06.407660 1606791 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa...
	I0630 15:43:07.067323 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:07.067168 1606791 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/old-k8s-version-836310.rawdisk...
	I0630 15:43:07.067384 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | Writing magic tar header
	I0630 15:43:07.067405 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | Writing SSH key tar header
	I0630 15:43:07.067418 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:07.067331 1606791 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310 ...
	I0630 15:43:07.067535 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310
	I0630 15:43:07.067568 1606769 main.go:141] libmachine: (old-k8s-version-836310) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310 (perms=drwx------)
	I0630 15:43:07.067580 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 15:43:07.067600 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:43:07.067616 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 15:43:07.067627 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 15:43:07.067635 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | checking permissions on dir: /home/jenkins
	I0630 15:43:07.067654 1606769 main.go:141] libmachine: (old-k8s-version-836310) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 15:43:07.067667 1606769 main.go:141] libmachine: (old-k8s-version-836310) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 15:43:07.067675 1606769 main.go:141] libmachine: (old-k8s-version-836310) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 15:43:07.067688 1606769 main.go:141] libmachine: (old-k8s-version-836310) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 15:43:07.067720 1606769 main.go:141] libmachine: (old-k8s-version-836310) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 15:43:07.067732 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | checking permissions on dir: /home
	I0630 15:43:07.067745 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | skipping /home - not owner
	I0630 15:43:07.067758 1606769 main.go:141] libmachine: (old-k8s-version-836310) creating domain...
	I0630 15:43:07.068871 1606769 main.go:141] libmachine: (old-k8s-version-836310) define libvirt domain using xml: 
	I0630 15:43:07.068893 1606769 main.go:141] libmachine: (old-k8s-version-836310) <domain type='kvm'>
	I0630 15:43:07.068903 1606769 main.go:141] libmachine: (old-k8s-version-836310)   <name>old-k8s-version-836310</name>
	I0630 15:43:07.068910 1606769 main.go:141] libmachine: (old-k8s-version-836310)   <memory unit='MiB'>3072</memory>
	I0630 15:43:07.068919 1606769 main.go:141] libmachine: (old-k8s-version-836310)   <vcpu>2</vcpu>
	I0630 15:43:07.068925 1606769 main.go:141] libmachine: (old-k8s-version-836310)   <features>
	I0630 15:43:07.068935 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <acpi/>
	I0630 15:43:07.068945 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <apic/>
	I0630 15:43:07.068952 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <pae/>
	I0630 15:43:07.068960 1606769 main.go:141] libmachine: (old-k8s-version-836310)     
	I0630 15:43:07.068987 1606769 main.go:141] libmachine: (old-k8s-version-836310)   </features>
	I0630 15:43:07.069021 1606769 main.go:141] libmachine: (old-k8s-version-836310)   <cpu mode='host-passthrough'>
	I0630 15:43:07.069030 1606769 main.go:141] libmachine: (old-k8s-version-836310)   
	I0630 15:43:07.069037 1606769 main.go:141] libmachine: (old-k8s-version-836310)   </cpu>
	I0630 15:43:07.069046 1606769 main.go:141] libmachine: (old-k8s-version-836310)   <os>
	I0630 15:43:07.069057 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <type>hvm</type>
	I0630 15:43:07.069066 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <boot dev='cdrom'/>
	I0630 15:43:07.069076 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <boot dev='hd'/>
	I0630 15:43:07.069115 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <bootmenu enable='no'/>
	I0630 15:43:07.069134 1606769 main.go:141] libmachine: (old-k8s-version-836310)   </os>
	I0630 15:43:07.069143 1606769 main.go:141] libmachine: (old-k8s-version-836310)   <devices>
	I0630 15:43:07.069150 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <disk type='file' device='cdrom'>
	I0630 15:43:07.069160 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/boot2docker.iso'/>
	I0630 15:43:07.069166 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <target dev='hdc' bus='scsi'/>
	I0630 15:43:07.069171 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <readonly/>
	I0630 15:43:07.069178 1606769 main.go:141] libmachine: (old-k8s-version-836310)     </disk>
	I0630 15:43:07.069183 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <disk type='file' device='disk'>
	I0630 15:43:07.069189 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 15:43:07.069199 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/old-k8s-version-836310.rawdisk'/>
	I0630 15:43:07.069210 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <target dev='hda' bus='virtio'/>
	I0630 15:43:07.069222 1606769 main.go:141] libmachine: (old-k8s-version-836310)     </disk>
	I0630 15:43:07.069230 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <interface type='network'>
	I0630 15:43:07.069241 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <source network='mk-old-k8s-version-836310'/>
	I0630 15:43:07.069252 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <model type='virtio'/>
	I0630 15:43:07.069272 1606769 main.go:141] libmachine: (old-k8s-version-836310)     </interface>
	I0630 15:43:07.069287 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <interface type='network'>
	I0630 15:43:07.069293 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <source network='default'/>
	I0630 15:43:07.069300 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <model type='virtio'/>
	I0630 15:43:07.069305 1606769 main.go:141] libmachine: (old-k8s-version-836310)     </interface>
	I0630 15:43:07.069311 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <serial type='pty'>
	I0630 15:43:07.069320 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <target port='0'/>
	I0630 15:43:07.069331 1606769 main.go:141] libmachine: (old-k8s-version-836310)     </serial>
	I0630 15:43:07.069339 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <console type='pty'>
	I0630 15:43:07.069351 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <target type='serial' port='0'/>
	I0630 15:43:07.069383 1606769 main.go:141] libmachine: (old-k8s-version-836310)     </console>
	I0630 15:43:07.069425 1606769 main.go:141] libmachine: (old-k8s-version-836310)     <rng model='virtio'>
	I0630 15:43:07.069438 1606769 main.go:141] libmachine: (old-k8s-version-836310)       <backend model='random'>/dev/random</backend>
	I0630 15:43:07.069466 1606769 main.go:141] libmachine: (old-k8s-version-836310)     </rng>
	I0630 15:43:07.069477 1606769 main.go:141] libmachine: (old-k8s-version-836310)     
	I0630 15:43:07.069490 1606769 main.go:141] libmachine: (old-k8s-version-836310)     
	I0630 15:43:07.069501 1606769 main.go:141] libmachine: (old-k8s-version-836310)   </devices>
	I0630 15:43:07.069508 1606769 main.go:141] libmachine: (old-k8s-version-836310) </domain>
	I0630 15:43:07.069521 1606769 main.go:141] libmachine: (old-k8s-version-836310) 
	I0630 15:43:07.073950 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:6b:ef:87 in network default
	I0630 15:43:07.074530 1606769 main.go:141] libmachine: (old-k8s-version-836310) starting domain...
	I0630 15:43:07.074550 1606769 main.go:141] libmachine: (old-k8s-version-836310) ensuring networks are active...
	I0630 15:43:07.074558 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:07.075277 1606769 main.go:141] libmachine: (old-k8s-version-836310) Ensuring network default is active
	I0630 15:43:07.075694 1606769 main.go:141] libmachine: (old-k8s-version-836310) Ensuring network mk-old-k8s-version-836310 is active
	I0630 15:43:07.076294 1606769 main.go:141] libmachine: (old-k8s-version-836310) getting domain XML...
	I0630 15:43:07.077257 1606769 main.go:141] libmachine: (old-k8s-version-836310) creating domain...
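Annotation: "define libvirt domain using xml" followed by "creating domain..." is again a define-then-start pair, this time for the VM itself (the second "creating domain" at 15:43:07.077257 is the boot step). A sketch under the same libvirt.org/go/libvirt assumption:

package kvm

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

// startDomain defines a persistent domain from XML like the <domain> block
// printed above and then boots it: the API equivalent of `virsh define`
// followed by `virsh start`.
func startDomain(conn *libvirt.Connect, xmlDef string) (*libvirt.Domain, error) {
	dom, err := conn.DomainDefineXML(xmlDef)
	if err != nil {
		return nil, fmt.Errorf("defining domain: %w", err)
	}
	if err := dom.Create(); err != nil {
		dom.Free()
		return nil, fmt.Errorf("starting domain: %w", err)
	}
	return dom, nil
}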
	I0630 15:43:08.494823 1606769 main.go:141] libmachine: (old-k8s-version-836310) waiting for IP...
	I0630 15:43:08.495763 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:08.496368 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:08.496424 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:08.496349 1606791 retry.go:31] will retry after 198.333347ms: waiting for domain to come up
	I0630 15:43:08.696875 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:08.697474 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:08.697532 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:08.697463 1606791 retry.go:31] will retry after 362.910081ms: waiting for domain to come up
	I0630 15:43:09.062249 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:09.062843 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:09.062903 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:09.062807 1606791 retry.go:31] will retry after 395.24864ms: waiting for domain to come up
	I0630 15:43:09.459462 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:09.460099 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:09.460128 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:09.460041 1606791 retry.go:31] will retry after 500.939994ms: waiting for domain to come up
	I0630 15:43:09.963317 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:09.963945 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:09.963985 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:09.963879 1606791 retry.go:31] will retry after 629.979979ms: waiting for domain to come up
	I0630 15:43:10.595604 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:10.596047 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:10.596076 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:10.596008 1606791 retry.go:31] will retry after 776.908347ms: waiting for domain to come up
	I0630 15:43:11.374818 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:11.375406 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:11.375433 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:11.375381 1606791 retry.go:31] will retry after 1.062262318s: waiting for domain to come up
	I0630 15:43:12.439890 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:12.440837 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:12.440868 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:12.440809 1606791 retry.go:31] will retry after 934.442012ms: waiting for domain to come up
	I0630 15:43:13.377218 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:13.377833 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:13.377926 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:13.377841 1606791 retry.go:31] will retry after 1.383485531s: waiting for domain to come up
	I0630 15:43:14.762724 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:14.763428 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:14.763472 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:14.763400 1606791 retry.go:31] will retry after 2.063726231s: waiting for domain to come up
	I0630 15:43:16.829672 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:16.830349 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:16.830372 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:16.830258 1606791 retry.go:31] will retry after 1.939367083s: waiting for domain to come up
	I0630 15:43:18.772504 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:18.773128 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:18.773158 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:18.773077 1606791 retry.go:31] will retry after 2.327621357s: waiting for domain to come up
	I0630 15:43:21.102553 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:21.103146 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:21.103171 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:21.103110 1606791 retry.go:31] will retry after 3.90700663s: waiting for domain to come up
	I0630 15:43:25.014358 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:25.014841 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:43:25.014901 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:43:25.014822 1606791 retry.go:31] will retry after 4.834626086s: waiting for domain to come up
	I0630 15:43:29.853849 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:29.882451 1606769 main.go:141] libmachine: (old-k8s-version-836310) found domain IP: 192.168.61.88
	I0630 15:43:29.882492 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has current primary IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
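Annotation: the roughly 21-second "waiting for IP" phase above is a poll of the network's DHCP leases with a growing delay between attempts (the retry.go intervals in the log also carry jitter). A sketch of that loop, assuming the libvirt.org/go/libvirt bindings, whose NetworkDHCPLease fields match the lease struct dumps that appear later in this log:

package kvm

import (
	"fmt"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until one matches the domain's
// MAC address, sleeping with a growing backoff between attempts.
func waitForIP(network *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac && l.IPaddr != "" {
				return l.IPaddr, nil
			}
		}
		time.Sleep(backoff)
		if backoff < 4*time.Second {
			backoff *= 2 // grow the wait between polls, as the retries above do
		}
	}
	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
}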
	I0630 15:43:29.882501 1606769 main.go:141] libmachine: (old-k8s-version-836310) reserving static IP address...
	I0630 15:43:29.883040 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-836310", mac: "52:54:00:5d:f3:de", ip: "192.168.61.88"} in network mk-old-k8s-version-836310
	I0630 15:43:30.257307 1606769 main.go:141] libmachine: (old-k8s-version-836310) reserved static IP address 192.168.61.88 for domain old-k8s-version-836310
	I0630 15:43:30.257344 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | Getting to WaitForSSH function...
	I0630 15:43:30.257353 1606769 main.go:141] libmachine: (old-k8s-version-836310) waiting for SSH...
	I0630 15:43:30.260446 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.260862 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:30.260897 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.261071 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | Using SSH client type: external
	I0630 15:43:30.261098 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa (-rw-------)
	I0630 15:43:30.261148 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:43:30.261160 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | About to run SSH command:
	I0630 15:43:30.261174 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | exit 0
	I0630 15:43:30.393785 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | SSH cmd err, output: <nil>: 
	I0630 15:43:30.394114 1606769 main.go:141] libmachine: (old-k8s-version-836310) KVM machine creation complete
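Annotation: the WaitForSSH step above just runs `exit 0` on the guest through the external ssh client until it succeeds. A sketch using a subset of the options visible in the log:

package kvm

import (
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest via the external ssh client, with the
// non-interactive options the log shows, and reports whether it succeeded.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

// waitForSSH polls sshReady until it succeeds or the timeout passes.
func waitForSSH(addr, keyPath string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if sshReady(addr, keyPath) {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}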
	I0630 15:43:30.394425 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetConfigRaw
	I0630 15:43:30.394995 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:30.395277 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:30.395468 1606769 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:43:30.395482 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetState
	I0630 15:43:30.396897 1606769 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:43:30.396916 1606769 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:43:30.396923 1606769 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:43:30.396933 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:30.399845 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.400256 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:30.400296 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.400560 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:30.400779 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.400982 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.401290 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:30.401612 1606769 main.go:141] libmachine: Using SSH client type: native
	I0630 15:43:30.401884 1606769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:43:30.401900 1606769 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:43:30.516801 1606769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:43:30.516831 1606769 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:43:30.516843 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:30.520484 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.520801 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:30.520834 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.521060 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:30.521326 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.521527 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.521684 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:30.521839 1606769 main.go:141] libmachine: Using SSH client type: native
	I0630 15:43:30.522094 1606769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:43:30.522113 1606769 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:43:30.638535 1606769 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:43:30.638644 1606769 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:43:30.638658 1606769 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:43:30.638669 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetMachineName
	I0630 15:43:30.638973 1606769 buildroot.go:166] provisioning hostname "old-k8s-version-836310"
	I0630 15:43:30.639014 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetMachineName
	I0630 15:43:30.639256 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:30.642108 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.642521 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:30.642548 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.642840 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:30.643046 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.643300 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.643449 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:30.643640 1606769 main.go:141] libmachine: Using SSH client type: native
	I0630 15:43:30.643964 1606769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:43:30.643983 1606769 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-836310 && echo "old-k8s-version-836310" | sudo tee /etc/hostname
	I0630 15:43:30.774729 1606769 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-836310
	
	I0630 15:43:30.774761 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:30.778697 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.779150 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:30.779177 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.779313 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:30.779524 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.779742 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:30.779945 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:30.780132 1606769 main.go:141] libmachine: Using SSH client type: native
	I0630 15:43:30.780345 1606769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:43:30.780360 1606769 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-836310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-836310/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-836310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:43:30.904031 1606769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:43:30.904263 1606769 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:43:30.904299 1606769 buildroot.go:174] setting up certificates
	I0630 15:43:30.904316 1606769 provision.go:84] configureAuth start
	I0630 15:43:30.904353 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetMachineName
	I0630 15:43:30.904675 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:43:30.908113 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.908539 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:30.908569 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.908835 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:30.912143 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.912534 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:30.912555 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:30.912801 1606769 provision.go:143] copyHostCerts
	I0630 15:43:30.912870 1606769 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:43:30.912893 1606769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:43:30.912984 1606769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:43:30.913107 1606769 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:43:30.913123 1606769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:43:30.913169 1606769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:43:30.913277 1606769 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:43:30.913288 1606769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:43:30.913337 1606769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:43:30.913434 1606769 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-836310 san=[127.0.0.1 192.168.61.88 localhost minikube old-k8s-version-836310]
	I0630 15:43:31.467570 1606769 provision.go:177] copyRemoteCerts
	I0630 15:43:31.467634 1606769 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:43:31.467669 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:31.470892 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.471465 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:31.471500 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.471676 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:31.471915 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:31.472252 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:31.472471 1606769 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:43:31.561634 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0630 15:43:31.594489 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:43:31.627547 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:43:31.659888 1606769 provision.go:87] duration metric: took 755.549776ms to configureAuth
	I0630 15:43:31.659946 1606769 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:43:31.660243 1606769 config.go:182] Loaded profile config "old-k8s-version-836310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:43:31.660347 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:31.663824 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.664242 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:31.664277 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.664463 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:31.664688 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:31.664881 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:31.665039 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:31.665389 1606769 main.go:141] libmachine: Using SSH client type: native
	I0630 15:43:31.665722 1606769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:43:31.665748 1606769 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:43:31.901642 1606769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:43:31.901681 1606769 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:43:31.901691 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetURL
	I0630 15:43:31.903200 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | using libvirt version 6000000
	I0630 15:43:31.906629 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.907016 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:31.907077 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.907349 1606769 main.go:141] libmachine: Docker is up and running!
	I0630 15:43:31.907370 1606769 main.go:141] libmachine: Reticulating splines...
	I0630 15:43:31.907391 1606769 client.go:171] duration metric: took 25.945253817s to LocalClient.Create
	I0630 15:43:31.907418 1606769 start.go:167] duration metric: took 25.94534242s to libmachine.API.Create "old-k8s-version-836310"
	I0630 15:43:31.907430 1606769 start.go:293] postStartSetup for "old-k8s-version-836310" (driver="kvm2")
	I0630 15:43:31.907463 1606769 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:43:31.907486 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:31.907796 1606769 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:43:31.907901 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:31.911734 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.912180 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:31.912211 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:31.912362 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:31.912548 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:31.912783 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:31.912982 1606769 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:43:32.001821 1606769 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:43:32.006618 1606769 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:43:32.006648 1606769 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:43:32.006720 1606769 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:43:32.006798 1606769 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:43:32.006901 1606769 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:43:32.018643 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:43:32.046455 1606769 start.go:296] duration metric: took 138.987145ms for postStartSetup
	I0630 15:43:32.046521 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetConfigRaw
	I0630 15:43:32.047176 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:43:32.050141 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.050473 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:32.050500 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.050772 1606769 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/config.json ...
	I0630 15:43:32.050962 1606769 start.go:128] duration metric: took 26.110630131s to createHost
	I0630 15:43:32.050987 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:32.053081 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.053458 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:32.053482 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.053665 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:32.053913 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:32.054124 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:32.054395 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:32.054693 1606769 main.go:141] libmachine: Using SSH client type: native
	I0630 15:43:32.054909 1606769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:43:32.054919 1606769 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:43:32.172667 1606769 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298212.148035227
	
	I0630 15:43:32.172693 1606769 fix.go:216] guest clock: 1751298212.148035227
	I0630 15:43:32.172703 1606769 fix.go:229] Guest: 2025-06-30 15:43:32.148035227 +0000 UTC Remote: 2025-06-30 15:43:32.050975529 +0000 UTC m=+26.242236458 (delta=97.059698ms)
	I0630 15:43:32.172731 1606769 fix.go:200] guest clock delta is within tolerance: 97.059698ms
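Annotation: the clock check above parses the guest's `date +%s.%N` output into a timestamp and compares it against the host clock (a 97ms delta here). A sketch of both halves; the 2-second tolerance is an assumption for illustration, since the real threshold is not shown in this log:

package kvm

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as "1751298212.148035227"
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, nsec, ok := strings.Cut(strings.TrimSpace(out), ".")
	if !ok {
		return time.Time{}, fmt.Errorf("unexpected clock format %q", out)
	}
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	n, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, n), nil
}

// clockDeltaOK reports whether guest and host clocks agree within tolerance;
// the 97.059698ms delta in the log passes easily against a 2s tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}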
	I0630 15:43:32.172737 1606769 start.go:83] releasing machines lock for "old-k8s-version-836310", held for 26.232520659s
	I0630 15:43:32.172772 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:32.173179 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:43:32.176375 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.176732 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:32.176761 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.176987 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:32.177673 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:32.177920 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:43:32.178015 1606769 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:43:32.178064 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:32.178164 1606769 ssh_runner.go:195] Run: cat /version.json
	I0630 15:43:32.178183 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:43:32.181002 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.181511 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:32.181535 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.181957 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:32.182485 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:32.182719 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:32.182913 1606769 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:43:32.184326 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.184798 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:32.184828 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:32.185063 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:43:32.185242 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:43:32.185477 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:43:32.185648 1606769 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:43:32.302765 1606769 ssh_runner.go:195] Run: systemctl --version
	I0630 15:43:32.309356 1606769 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:43:32.474070 1606769 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:43:32.480541 1606769 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:43:32.480608 1606769 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:43:32.499842 1606769 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:43:32.499875 1606769 start.go:495] detecting cgroup driver to use...
	I0630 15:43:32.499973 1606769 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:43:32.518984 1606769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:43:32.535833 1606769 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:43:32.535912 1606769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:43:32.551757 1606769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:43:32.568087 1606769 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:43:32.709741 1606769 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:43:32.865344 1606769 docker.go:246] disabling docker service ...
	I0630 15:43:32.865465 1606769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:43:32.881555 1606769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:43:32.897659 1606769 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:43:33.099297 1606769 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:43:33.239031 1606769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:43:33.254240 1606769 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:43:33.278180 1606769 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0630 15:43:33.278246 1606769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:33.290026 1606769 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:43:33.290105 1606769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:33.303601 1606769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:43:33.315971 1606769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
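Annotation: the sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and normalize conmon_cgroup in the CRI-O drop-in. A Go analogue of the replace-a-whole-line pattern those sed commands use; like them, it assumes the key already appears somewhere in /etc/crio/crio.conf.d/02-crio.conf:

package kvm

import "regexp"

// setCrioOption rewrites a `key = value` line in a CRI-O drop-in config,
// e.g. setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2") or
// setCrioOption(conf, "cgroup_manager", "cgroupfs").
func setCrioOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}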
	I0630 15:43:33.328394 1606769 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:43:33.341128 1606769 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:43:33.353461 1606769 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:43:33.353548 1606769 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:43:33.368213 1606769 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
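Annotation: the failed sysctl above is the expected path when br_netfilter is not yet loaded, which is why the log calls it something that "might be okay"; the fallback is to load the module and then force IPv4 forwarding on. A sketch of that sequence:

package kvm

import "os/exec"

// ensureBridgeNetfilter mirrors the fallback in the log: try to read the
// bridge-nf-call-iptables sysctl; if that fails (the "cannot stat" error),
// load br_netfilter instead, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}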
	I0630 15:43:33.379369 1606769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:43:33.535823 1606769 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:43:33.654671 1606769 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:43:33.654754 1606769 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:43:33.660436 1606769 start.go:563] Will wait 60s for crictl version
	I0630 15:43:33.660498 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:33.666154 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:43:33.714470 1606769 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:43:33.714556 1606769 ssh_runner.go:195] Run: crio --version
	I0630 15:43:33.747822 1606769 ssh_runner.go:195] Run: crio --version
	I0630 15:43:33.784005 1606769 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0630 15:43:33.785451 1606769 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:43:33.789320 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:33.789706 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:43:22 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:43:33.789737 1606769 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:43:33.789993 1606769 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0630 15:43:33.794519 1606769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:43:33.809019 1606769 kubeadm.go:875] updating cluster {Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:43:33.809145 1606769 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 15:43:33.809206 1606769 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:43:33.844604 1606769 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0630 15:43:33.844672 1606769 ssh_runner.go:195] Run: which lz4
	I0630 15:43:33.850511 1606769 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:43:33.856193 1606769 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:43:33.856238 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0630 15:43:35.649534 1606769 crio.go:462] duration metric: took 1.799067288s to copy over tarball
	I0630 15:43:35.649619 1606769 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:43:37.871919 1606769 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.222267116s)
	I0630 15:43:37.871949 1606769 crio.go:469] duration metric: took 2.222381334s to extract the tarball
	I0630 15:43:37.871959 1606769 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 15:43:37.914121 1606769 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:43:37.958392 1606769 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0630 15:43:37.958424 1606769 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0630 15:43:37.958532 1606769 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:43:37.958579 1606769 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0630 15:43:37.958594 1606769 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:43:37.958517 1606769 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:43:37.958628 1606769 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:43:37.958532 1606769 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:43:37.958556 1606769 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0630 15:43:37.958770 1606769 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:43:37.960423 1606769 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0630 15:43:37.960444 1606769 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:43:37.960452 1606769 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:43:37.960459 1606769 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0630 15:43:37.960426 1606769 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:43:37.960525 1606769 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:43:37.960530 1606769 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:43:37.960840 1606769 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:43:38.189531 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0630 15:43:38.224070 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0630 15:43:38.226478 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:43:38.231161 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:43:38.238984 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0630 15:43:38.251032 1606769 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0630 15:43:38.251078 1606769 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0630 15:43:38.251131 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:38.252277 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:43:38.275392 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:43:38.361554 1606769 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0630 15:43:38.361610 1606769 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0630 15:43:38.361663 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:38.379800 1606769 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0630 15:43:38.379854 1606769 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:43:38.379913 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:38.388445 1606769 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0630 15:43:38.388529 1606769 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:43:38.388614 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:38.405851 1606769 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0630 15:43:38.405907 1606769 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:43:38.405956 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:38.405964 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:43:38.428101 1606769 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0630 15:43:38.428150 1606769 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:43:38.428200 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:38.432653 1606769 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0630 15:43:38.432709 1606769 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:43:38.432733 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:43:38.432757 1606769 ssh_runner.go:195] Run: which crictl
	I0630 15:43:38.432790 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:43:38.432905 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:43:38.468414 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:43:38.468471 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:43:38.468527 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:43:38.468588 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:43:38.542529 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:43:38.554867 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:43:38.554948 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:43:38.686834 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:43:38.686880 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:43:38.686834 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:43:38.686919 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:43:38.708387 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:43:38.708459 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:43:38.722736 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:43:38.855199 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:43:38.855239 1606769 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0630 15:43:38.855309 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:43:38.855335 1606769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:43:38.868169 1606769 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0630 15:43:38.880473 1606769 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0630 15:43:38.892459 1606769 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0630 15:43:38.956024 1606769 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0630 15:43:38.956098 1606769 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0630 15:43:38.956110 1606769 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0630 15:43:39.275711 1606769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:43:39.425843 1606769 cache_images.go:92] duration metric: took 1.467392045s to LoadCachedImages
	W0630 15:43:39.425979 1606769 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0630 15:43:39.426003 1606769 kubeadm.go:926] updating node { 192.168.61.88 8443 v1.20.0 crio true true} ...
	I0630 15:43:39.426210 1606769 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-836310 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
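The unit fragment above is staged as a systemd drop-in (the scp at 15:43:39.512 below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). To inspect the merged unit on the node (a sketch using stock systemctl subcommands):

    sudo systemctl cat kubelet    # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload  # needed after changing drop-ins, as the log does below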
	I0630 15:43:39.426304 1606769 ssh_runner.go:195] Run: crio config
	I0630 15:43:39.488387 1606769 cni.go:84] Creating CNI manager for ""
	I0630 15:43:39.488419 1606769 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:43:39.488428 1606769 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:43:39.488448 1606769 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-836310 NodeName:old-k8s-version-836310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0630 15:43:39.488648 1606769 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-836310"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
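
This rendered config can be validated without mutating the node before the real init below; --dry-run is a stock kubeadm flag, and the binary path matches the one checked at 15:43:39.488 (a sketch):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run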
	
	I0630 15:43:39.488737 1606769 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0630 15:43:39.500827 1606769 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:43:39.500912 1606769 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:43:39.512946 1606769 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0630 15:43:39.535390 1606769 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:43:39.557997 1606769 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0630 15:43:39.580582 1606769 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0630 15:43:39.584959 1606769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:43:39.599578 1606769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:43:39.756567 1606769 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:43:39.787752 1606769 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310 for IP: 192.168.61.88
	I0630 15:43:39.787788 1606769 certs.go:194] generating shared ca certs ...
	I0630 15:43:39.787814 1606769 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:39.788048 1606769 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:43:39.788111 1606769 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:43:39.788125 1606769 certs.go:256] generating profile certs ...
	I0630 15:43:39.788212 1606769 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/client.key
	I0630 15:43:39.788249 1606769 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/client.crt with IP's: []
	I0630 15:43:40.312951 1606769 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/client.crt ...
	I0630 15:43:40.312989 1606769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/client.crt: {Name:mk1aefc3e6c30ddf6b6303202ef5e78380cd8d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:40.313209 1606769 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/client.key ...
	I0630 15:43:40.313232 1606769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/client.key: {Name:mk1ce7a3569c6b3226fa90571413a9cd0e62e54a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:40.313322 1606769 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key.326a3c9b
	I0630 15:43:40.313341 1606769 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt.326a3c9b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.88]
	I0630 15:43:40.334323 1606769 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt.326a3c9b ...
	I0630 15:43:40.334365 1606769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt.326a3c9b: {Name:mk9b8af628c8a51b1c1549191a80be3dcbc79a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:40.334567 1606769 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key.326a3c9b ...
	I0630 15:43:40.334584 1606769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key.326a3c9b: {Name:mke52a101ceb08fd2b2478ffe61ee4c49b6a6117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:40.334663 1606769 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt.326a3c9b -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt
	I0630 15:43:40.334777 1606769 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key.326a3c9b -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key
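The apiserver cert just assembled carries the SANs requested at 15:43:40.313 above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.88). Reading them back with stock openssl (a sketch):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt \
        | grep -A1 'Subject Alternative Name'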
	I0630 15:43:40.334864 1606769 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.key
	I0630 15:43:40.334884 1606769 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.crt with IP's: []
	I0630 15:43:40.574941 1606769 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.crt ...
	I0630 15:43:40.574975 1606769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.crt: {Name:mkf6f7be7b17f15ed61c3907aa2dd7f66fddd7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:40.575154 1606769 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.key ...
	I0630 15:43:40.575168 1606769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.key: {Name:mk21df5070fa2ef1a28a6e31e6caf0b6e214630d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:43:40.575494 1606769 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:43:40.575552 1606769 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:43:40.575567 1606769 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:43:40.575604 1606769 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:43:40.575630 1606769 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:43:40.575655 1606769 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:43:40.575697 1606769 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:43:40.576432 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:43:40.611272 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:43:40.651690 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:43:40.686623 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:43:40.729867 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0630 15:43:40.765344 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 15:43:40.796511 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:43:40.828648 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:43:40.860003 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:43:40.888402 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:43:40.924915 1606769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:43:40.958021 1606769 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:43:40.979706 1606769 ssh_runner.go:195] Run: openssl version
	I0630 15:43:40.986311 1606769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:43:40.999668 1606769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:43:41.005305 1606769 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:43:41.005385 1606769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:43:41.013674 1606769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:43:41.027066 1606769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:43:41.041857 1606769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:43:41.047230 1606769 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:43:41.047381 1606769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:43:41.054642 1606769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:43:41.068323 1606769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:43:41.082367 1606769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:43:41.087853 1606769 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:43:41.087925 1606769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:43:41.095376 1606769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
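The 8-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values produced by the `openssl x509 -hash` runs interleaved with the symlinking; hash-named links are how OpenSSL locates a CA cert at verification time. Reproducing one by hand (a sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem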
	I0630 15:43:41.108438 1606769 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:43:41.113165 1606769 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:43:41.113240 1606769 kubeadm.go:392] StartCluster: {Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:43:41.113338 1606769 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:43:41.113451 1606769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:43:41.154388 1606769 cri.go:89] found id: ""
	I0630 15:43:41.154467 1606769 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:43:41.167204 1606769 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:43:41.179807 1606769 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:43:41.191273 1606769 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:43:41.191303 1606769 kubeadm.go:157] found existing configuration files:
	
	I0630 15:43:41.191367 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:43:41.202451 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:43:41.202536 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:43:41.213833 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:43:41.228804 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:43:41.228888 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:43:41.241569 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:43:41.253805 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:43:41.253881 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:43:41.265137 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:43:41.275696 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:43:41.275770 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
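The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed before init. Collapsed into a loop, the same logic reads (a sketch, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done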
	I0630 15:43:41.287731 1606769 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:43:41.551126 1606769 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:45:39.792121 1606769 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:45:39.792260 1606769 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:45:39.795087 1606769 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:45:39.795197 1606769 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:45:39.795313 1606769 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:45:39.795462 1606769 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:45:39.795608 1606769 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:45:39.795702 1606769 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:45:39.797704 1606769 out.go:235]   - Generating certificates and keys ...
	I0630 15:45:39.797793 1606769 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:45:39.797855 1606769 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:45:39.797947 1606769 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:45:39.798031 1606769 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:45:39.798117 1606769 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:45:39.798190 1606769 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:45:39.798267 1606769 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:45:39.798438 1606769 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-836310] and IPs [192.168.61.88 127.0.0.1 ::1]
	I0630 15:45:39.798525 1606769 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:45:39.798669 1606769 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-836310] and IPs [192.168.61.88 127.0.0.1 ::1]
	I0630 15:45:39.798763 1606769 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:45:39.798847 1606769 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:45:39.798915 1606769 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:45:39.798992 1606769 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:45:39.799067 1606769 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:45:39.799143 1606769 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:45:39.799240 1606769 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:45:39.799328 1606769 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:45:39.799447 1606769 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:45:39.799582 1606769 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:45:39.799637 1606769 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:45:39.799723 1606769 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:45:39.801537 1606769 out.go:235]   - Booting up control plane ...
	I0630 15:45:39.801660 1606769 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:45:39.801772 1606769 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:45:39.801849 1606769 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:45:39.801940 1606769 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:45:39.802190 1606769 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:45:39.802259 1606769 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:45:39.802348 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:45:39.802542 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:45:39.802662 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:45:39.802959 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:45:39.803048 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:45:39.803220 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:45:39.803301 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:45:39.803503 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:45:39.803616 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:45:39.803903 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:45:39.803918 1606769 kubeadm.go:310] 
	I0630 15:45:39.803961 1606769 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:45:39.804016 1606769 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:45:39.804024 1606769 kubeadm.go:310] 
	I0630 15:45:39.804068 1606769 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:45:39.804127 1606769 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:45:39.804278 1606769 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:45:39.804293 1606769 kubeadm.go:310] 
	I0630 15:45:39.804445 1606769 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:45:39.804493 1606769 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:45:39.804537 1606769 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:45:39.804545 1606769 kubeadm.go:310] 
	I0630 15:45:39.804748 1606769 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:45:39.804862 1606769 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:45:39.804872 1606769 kubeadm.go:310] 
	I0630 15:45:39.805001 1606769 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:45:39.805143 1606769 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:45:39.805260 1606769 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:45:39.805382 1606769 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:45:39.805420 1606769 kubeadm.go:310] 
	W0630 15:45:39.805589 1606769 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-836310] and IPs [192.168.61.88 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-836310] and IPs [192.168.61.88 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
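A connection refused on 127.0.0.1:10248 means the kubelet never came up (or is crash-looping), so the journalctl suggestion above is the fastest way to find the first fatal line. On a guest running a kubelet this old, the host's cgroup hierarchy is also worth checking, since a cgroup v2 host is a known source of startup failures for v1.20-era kubelets (a sketch; both commands are stock tooling):

    sudo journalctl -u kubelet --no-pager | tail -n 30   # first fatal kubelet line
    stat -fc %T /sys/fs/cgroup                           # cgroup2fs => unified cgroup v2 hierarchy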
	
	I0630 15:45:39.805648 1606769 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:45:41.310767 1606769 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.50507331s)
	I0630 15:45:41.310866 1606769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:45:41.330754 1606769 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:45:41.344307 1606769 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:45:41.344336 1606769 kubeadm.go:157] found existing configuration files:
	
	I0630 15:45:41.344479 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:45:41.356249 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:45:41.356339 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:45:41.369621 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:45:41.381458 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:45:41.381525 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:45:41.394133 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:45:41.405990 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:45:41.406059 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:45:41.418340 1606769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:45:41.429264 1606769 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:45:41.429336 1606769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:45:41.441487 1606769 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:45:41.660594 1606769 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:47:37.876526 1606769 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:47:37.876658 1606769 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:47:37.879798 1606769 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:47:37.879940 1606769 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:47:37.880049 1606769 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:47:37.880176 1606769 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:47:37.880315 1606769 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:47:37.880469 1606769 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:47:37.882821 1606769 out.go:235]   - Generating certificates and keys ...
	I0630 15:47:37.882947 1606769 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:47:37.883070 1606769 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:47:37.883192 1606769 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:47:37.883261 1606769 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:47:37.883348 1606769 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:47:37.883404 1606769 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:47:37.883478 1606769 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:47:37.883549 1606769 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:47:37.883636 1606769 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:47:37.883726 1606769 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:47:37.883775 1606769 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:47:37.883837 1606769 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:47:37.883897 1606769 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:47:37.883957 1606769 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:47:37.884035 1606769 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:47:37.884102 1606769 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:47:37.884220 1606769 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:47:37.884329 1606769 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:47:37.884377 1606769 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:47:37.884454 1606769 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:47:37.886550 1606769 out.go:235]   - Booting up control plane ...
	I0630 15:47:37.886707 1606769 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:47:37.886846 1606769 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:47:37.886917 1606769 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:47:37.886992 1606769 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:47:37.887207 1606769 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:47:37.887280 1606769 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:47:37.887382 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:47:37.887580 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:47:37.887685 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:47:37.887854 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:47:37.887932 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:47:37.888221 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:47:37.888323 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:47:37.888547 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:47:37.888622 1606769 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:47:37.888849 1606769 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:47:37.888868 1606769 kubeadm.go:310] 
	I0630 15:47:37.888900 1606769 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:47:37.888942 1606769 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:47:37.888948 1606769 kubeadm.go:310] 
	I0630 15:47:37.888976 1606769 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:47:37.889024 1606769 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:47:37.889113 1606769 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:47:37.889122 1606769 kubeadm.go:310] 
	I0630 15:47:37.889208 1606769 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:47:37.889268 1606769 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:47:37.889347 1606769 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:47:37.889364 1606769 kubeadm.go:310] 
	I0630 15:47:37.889541 1606769 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:47:37.889616 1606769 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:47:37.889623 1606769 kubeadm.go:310] 
	I0630 15:47:37.889757 1606769 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:47:37.889833 1606769 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:47:37.889894 1606769 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:47:37.889989 1606769 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:47:37.890079 1606769 kubeadm.go:310] 
	I0630 15:47:37.890095 1606769 kubeadm.go:394] duration metric: took 3m56.776862472s to StartCluster
	I0630 15:47:37.890151 1606769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:47:37.890227 1606769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:47:37.941820 1606769 cri.go:89] found id: ""
	I0630 15:47:37.941856 1606769 logs.go:282] 0 containers: []
	W0630 15:47:37.941864 1606769 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:47:37.941871 1606769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:47:37.941977 1606769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:47:37.982091 1606769 cri.go:89] found id: ""
	I0630 15:47:37.982132 1606769 logs.go:282] 0 containers: []
	W0630 15:47:37.982144 1606769 logs.go:284] No container was found matching "etcd"
	I0630 15:47:37.982153 1606769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:47:37.982233 1606769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:47:38.023936 1606769 cri.go:89] found id: ""
	I0630 15:47:38.023993 1606769 logs.go:282] 0 containers: []
	W0630 15:47:38.024006 1606769 logs.go:284] No container was found matching "coredns"
	I0630 15:47:38.024016 1606769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:47:38.024129 1606769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:47:38.065252 1606769 cri.go:89] found id: ""
	I0630 15:47:38.065289 1606769 logs.go:282] 0 containers: []
	W0630 15:47:38.065299 1606769 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:47:38.065306 1606769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:47:38.065377 1606769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:47:38.110803 1606769 cri.go:89] found id: ""
	I0630 15:47:38.110855 1606769 logs.go:282] 0 containers: []
	W0630 15:47:38.110869 1606769 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:47:38.110879 1606769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:47:38.110962 1606769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:47:38.151349 1606769 cri.go:89] found id: ""
	I0630 15:47:38.151387 1606769 logs.go:282] 0 containers: []
	W0630 15:47:38.151399 1606769 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:47:38.151409 1606769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:47:38.151490 1606769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:47:38.199992 1606769 cri.go:89] found id: ""
	I0630 15:47:38.200028 1606769 logs.go:282] 0 containers: []
	W0630 15:47:38.200040 1606769 logs.go:284] No container was found matching "kindnet"
	I0630 15:47:38.200055 1606769 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:47:38.200071 1606769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:47:38.332163 1606769 logs.go:123] Gathering logs for container status ...
	I0630 15:47:38.332222 1606769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:47:38.392420 1606769 logs.go:123] Gathering logs for kubelet ...
	I0630 15:47:38.392456 1606769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:47:38.475680 1606769 logs.go:123] Gathering logs for dmesg ...
	I0630 15:47:38.475757 1606769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:47:38.498087 1606769 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:47:38.498130 1606769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:47:38.601850 1606769 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0630 15:47:38.601947 1606769 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0630 15:47:38.602044 1606769 out.go:270] * 
	W0630 15:47:38.602143 1606769 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:47:38.602167 1606769 out.go:270] * 
	W0630 15:47:38.603957 1606769 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0630 15:47:38.607922 1606769 out.go:201] 
	W0630 15:47:38.609377 1606769 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:47:38.609475 1606769 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0630 15:47:38.609510 1606769 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0630 15:47:38.611401 1606769 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-836310 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 6 (318.057337ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0630 15:47:38.980046 1610129 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-836310" does not appear in /home/jenkins/minikube-integration/20991-1550299/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-836310" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (273.22s)
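
The FirstStart failure above reduces to one symptom: the kubelet never answered its health endpoint (connection refused on 127.0.0.1:10248), so kubeadm timed out in the wait-control-plane phase on both attempts and no control-plane containers were ever created. A minimal triage sketch, using only the commands the log itself recommends plus the profile name from this run (adjust for your environment; the cgroup-driver flag is the log's own suggestion, not a confirmed fix):

	# inside the VM, e.g. via: minikube ssh -p old-k8s-version-836310
	systemctl status kubelet                  # is the unit active at all?
	journalctl -xeu kubelet | tail -n 100     # most recent kubelet errors
	# any control-plane containers created under CRI-O?
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry with a systemd cgroup driver, per the suggestion in the log
	minikube start -p old-k8s-version-836310 --extra-config=kubelet.cgroup-driver=systemd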

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-836310 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-836310 create -f testdata/busybox.yaml: exit status 1 (66.583456ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-836310" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-836310 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 6 (308.573305ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0630 15:47:39.367041 1610165 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-836310" does not appear in /home/jenkins/minikube-integration/20991-1550299/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-836310" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 6 (284.338692ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0630 15:47:39.657516 1610194 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-836310" does not appear in /home/jenkins/minikube-integration/20991-1550299/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-836310" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.66s)
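
DeployApp fails in well under a second because the kubectl context "old-k8s-version-836310" was never written: FirstStart aborted before minikube could update the kubeconfig, so this is a cascading failure rather than an independent regression. A quick way to confirm that reading of the error, assuming the same profile name:

	kubectl config get-contexts                          # the profile's context should be listed here
	minikube update-context -p old-k8s-version-836310    # repairs a stale kubeconfig endpoint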

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (85.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-836310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-836310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m25.048053351s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-836310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-836310 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-836310 describe deploy/metrics-server -n kube-system: exit status 1 (52.21797ms)

** stderr ** 
	error: context "old-k8s-version-836310" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-836310 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 6 (279.056814ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0630 15:49:05.032935 1612085 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-836310" does not appear in /home/jenkins/minikube-integration/20991-1550299/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-836310" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (85.38s)
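Note: the MK_ADDON_ENABLE failure here is downstream of the apiserver being unreachable: as the callback command in the stderr shows, `addons enable` is carried out by running kubectl against localhost:8443 inside the VM, so a refused connection means the control plane was not up, not that the addon manifests are wrong. A quick check of that ordering, as a sketch (the binary and kubeconfig paths are copied from the callback command in the log):

	# verify the in-VM apiserver answers before retrying the addon
	out/minikube-linux-amd64 ssh -p old-k8s-version-836310 -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/healthz
	# only then retry the enable
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-836310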

TestStartStop/group/old-k8s-version/serial/SecondStart (544.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-836310 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-836310 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (9m2.650548385s)

-- stdout --
	* [old-k8s-version-836310] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.33.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.33.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-836310" primary control-plane node in "old-k8s-version-836310" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-836310" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0630 15:49:07.683729 1612198 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:49:07.684017 1612198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:49:07.684029 1612198 out.go:358] Setting ErrFile to fd 2...
	I0630 15:49:07.684033 1612198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:49:07.684222 1612198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:49:07.684818 1612198 out.go:352] Setting JSON to false
	I0630 15:49:07.685985 1612198 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34240,"bootTime":1751264308,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:49:07.686115 1612198 start.go:140] virtualization: kvm guest
	I0630 15:49:07.688730 1612198 out.go:177] * [old-k8s-version-836310] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:49:07.690576 1612198 notify.go:220] Checking for updates...
	I0630 15:49:07.690586 1612198 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:49:07.692491 1612198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:49:07.695071 1612198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:49:07.696466 1612198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:49:07.698146 1612198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:49:07.699660 1612198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:49:07.701862 1612198 config.go:182] Loaded profile config "old-k8s-version-836310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:49:07.702530 1612198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:49:07.702636 1612198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:49:07.720781 1612198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42613
	I0630 15:49:07.721384 1612198 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:49:07.722135 1612198 main.go:141] libmachine: Using API Version  1
	I0630 15:49:07.722168 1612198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:49:07.722642 1612198 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:49:07.722854 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:49:07.724994 1612198 out.go:177] * Kubernetes 1.33.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.33.2
	I0630 15:49:07.726429 1612198 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:49:07.726807 1612198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:49:07.726860 1612198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:49:07.744360 1612198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0630 15:49:07.745001 1612198 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:49:07.745595 1612198 main.go:141] libmachine: Using API Version  1
	I0630 15:49:07.745632 1612198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:49:07.746051 1612198 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:49:07.746306 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:49:07.790536 1612198 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 15:49:07.791942 1612198 start.go:304] selected driver: kvm2
	I0630 15:49:07.791967 1612198 start.go:908] validating driver "kvm2" against &{Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0
ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:49:07.792145 1612198 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:49:07.793492 1612198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:49:07.793588 1612198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:49:07.812598 1612198 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:49:07.813056 1612198 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:49:07.813090 1612198 cni.go:84] Creating CNI manager for ""
	I0630 15:49:07.813134 1612198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:49:07.813172 1612198 start.go:347] cluster config:
	{Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:49:07.813282 1612198 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:49:07.815264 1612198 out.go:177] * Starting "old-k8s-version-836310" primary control-plane node in "old-k8s-version-836310" cluster
	I0630 15:49:07.816473 1612198 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 15:49:07.816537 1612198 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0630 15:49:07.816547 1612198 cache.go:56] Caching tarball of preloaded images
	I0630 15:49:07.816665 1612198 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:49:07.816682 1612198 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0630 15:49:07.816801 1612198 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/config.json ...
	I0630 15:49:07.817029 1612198 start.go:360] acquireMachinesLock for old-k8s-version-836310: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:49:34.303814 1612198 start.go:364] duration metric: took 26.48675442s to acquireMachinesLock for "old-k8s-version-836310"
	I0630 15:49:34.303882 1612198 start.go:96] Skipping create...Using existing machine configuration
	I0630 15:49:34.303894 1612198 fix.go:54] fixHost starting: 
	I0630 15:49:34.304348 1612198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:49:34.304416 1612198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:49:34.326361 1612198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0630 15:49:34.326918 1612198 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:49:34.327506 1612198 main.go:141] libmachine: Using API Version  1
	I0630 15:49:34.327535 1612198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:49:34.327923 1612198 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:49:34.328126 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:49:34.328310 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetState
	I0630 15:49:34.330436 1612198 fix.go:112] recreateIfNeeded on old-k8s-version-836310: state=Stopped err=<nil>
	I0630 15:49:34.330494 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	W0630 15:49:34.330664 1612198 fix.go:138] unexpected machine state, will restart: <nil>
	I0630 15:49:34.332698 1612198 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-836310" ...
	I0630 15:49:34.334077 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .Start
	I0630 15:49:34.334352 1612198 main.go:141] libmachine: (old-k8s-version-836310) starting domain...
	I0630 15:49:34.334375 1612198 main.go:141] libmachine: (old-k8s-version-836310) ensuring networks are active...
	I0630 15:49:34.335434 1612198 main.go:141] libmachine: (old-k8s-version-836310) Ensuring network default is active
	I0630 15:49:34.335946 1612198 main.go:141] libmachine: (old-k8s-version-836310) Ensuring network mk-old-k8s-version-836310 is active
	I0630 15:49:34.336528 1612198 main.go:141] libmachine: (old-k8s-version-836310) getting domain XML...
	I0630 15:49:34.337536 1612198 main.go:141] libmachine: (old-k8s-version-836310) creating domain...
	I0630 15:49:35.944088 1612198 main.go:141] libmachine: (old-k8s-version-836310) waiting for IP...
	I0630 15:49:35.945769 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:35.946466 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:35.946539 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:35.946442 1612598 retry.go:31] will retry after 261.790139ms: waiting for domain to come up
	I0630 15:49:36.210605 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:36.211349 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:36.211379 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:36.211335 1612598 retry.go:31] will retry after 306.360216ms: waiting for domain to come up
	I0630 15:49:36.520322 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:36.521061 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:36.521106 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:36.521017 1612598 retry.go:31] will retry after 438.84461ms: waiting for domain to come up
	I0630 15:49:36.961996 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:36.962785 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:36.962819 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:36.962754 1612598 retry.go:31] will retry after 531.976069ms: waiting for domain to come up
	I0630 15:49:37.496664 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:37.497253 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:37.497285 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:37.497196 1612598 retry.go:31] will retry after 734.723958ms: waiting for domain to come up
	I0630 15:49:38.234245 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:38.234985 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:38.235012 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:38.234944 1612598 retry.go:31] will retry after 782.00344ms: waiting for domain to come up
	I0630 15:49:39.019127 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:39.019785 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:39.019812 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:39.019694 1612598 retry.go:31] will retry after 1.053168192s: waiting for domain to come up
	I0630 15:49:40.074418 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:40.075505 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:40.075528 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:40.075406 1612598 retry.go:31] will retry after 1.125238832s: waiting for domain to come up
	I0630 15:49:41.206736 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:41.207481 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:41.207502 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:41.207388 1612598 retry.go:31] will retry after 1.16705075s: waiting for domain to come up
	I0630 15:49:42.376393 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:42.377161 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:42.377191 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:42.377119 1612598 retry.go:31] will retry after 2.233144067s: waiting for domain to come up
	I0630 15:49:44.618683 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:44.619341 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:44.619369 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:44.619230 1612598 retry.go:31] will retry after 2.032404699s: waiting for domain to come up
	I0630 15:49:46.654692 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:46.655396 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:46.655419 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:46.655306 1612598 retry.go:31] will retry after 2.450778094s: waiting for domain to come up
	I0630 15:49:49.107962 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:49.108471 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:49.108575 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:49.108477 1612598 retry.go:31] will retry after 4.17941326s: waiting for domain to come up
	I0630 15:49:53.291166 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:53.291889 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | unable to find current IP address of domain old-k8s-version-836310 in network mk-old-k8s-version-836310
	I0630 15:49:53.291921 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | I0630 15:49:53.291845 1612598 retry.go:31] will retry after 5.329417724s: waiting for domain to come up
	I0630 15:49:58.623305 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:58.659576 1612198 main.go:141] libmachine: (old-k8s-version-836310) found domain IP: 192.168.61.88
	I0630 15:49:58.659606 1612198 main.go:141] libmachine: (old-k8s-version-836310) reserving static IP address...
	I0630 15:49:58.659644 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has current primary IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:58.662830 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "old-k8s-version-836310", mac: "52:54:00:5d:f3:de", ip: "192.168.61.88"} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:58.662887 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | skip adding static IP to network mk-old-k8s-version-836310 - found existing host DHCP lease matching {name: "old-k8s-version-836310", mac: "52:54:00:5d:f3:de", ip: "192.168.61.88"}
	I0630 15:49:58.662904 1612198 main.go:141] libmachine: (old-k8s-version-836310) reserved static IP address 192.168.61.88 for domain old-k8s-version-836310
	I0630 15:49:58.662926 1612198 main.go:141] libmachine: (old-k8s-version-836310) waiting for SSH...
	I0630 15:49:58.662942 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | Getting to WaitForSSH function...
	I0630 15:49:58.667678 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:58.668354 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:58.668388 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:58.668733 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | Using SSH client type: external
	I0630 15:49:58.668765 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa (-rw-------)
	I0630 15:49:58.668796 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:49:58.668805 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | About to run SSH command:
	I0630 15:49:58.668816 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | exit 0
	I0630 15:49:58.801703 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | SSH cmd err, output: <nil>: 
	I0630 15:49:59.235427 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetConfigRaw
	I0630 15:49:59.236191 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:49:59.239955 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.240512 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.240559 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.240811 1612198 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/config.json ...
	I0630 15:49:59.241068 1612198 machine.go:93] provisionDockerMachine start ...
	I0630 15:49:59.241098 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:49:59.241329 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:49:59.244639 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.245168 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.245195 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.245445 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:49:59.245693 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.245894 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.246042 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:49:59.246239 1612198 main.go:141] libmachine: Using SSH client type: native
	I0630 15:49:59.246566 1612198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:49:59.246586 1612198 main.go:141] libmachine: About to run SSH command:
	hostname
	I0630 15:49:59.366015 1612198 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0630 15:49:59.366048 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetMachineName
	I0630 15:49:59.366357 1612198 buildroot.go:166] provisioning hostname "old-k8s-version-836310"
	I0630 15:49:59.366390 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetMachineName
	I0630 15:49:59.366987 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:49:59.371285 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.371819 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.371849 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.372120 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:49:59.372374 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.372601 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.372739 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:49:59.372996 1612198 main.go:141] libmachine: Using SSH client type: native
	I0630 15:49:59.373343 1612198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:49:59.373366 1612198 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-836310 && echo "old-k8s-version-836310" | sudo tee /etc/hostname
	I0630 15:49:59.511554 1612198 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-836310
	
	I0630 15:49:59.511581 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:49:59.515685 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.516153 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.516187 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.516387 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:49:59.516580 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.516759 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.516946 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:49:59.517226 1612198 main.go:141] libmachine: Using SSH client type: native
	I0630 15:49:59.517565 1612198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:49:59.517587 1612198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-836310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-836310/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-836310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:49:59.651566 1612198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:49:59.651608 1612198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:49:59.651636 1612198 buildroot.go:174] setting up certificates
	I0630 15:49:59.651662 1612198 provision.go:84] configureAuth start
	I0630 15:49:59.651683 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetMachineName
	I0630 15:49:59.652003 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:49:59.655697 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.656095 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.656130 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.656402 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:49:59.659110 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.659524 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.659546 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.659724 1612198 provision.go:143] copyHostCerts
	I0630 15:49:59.659793 1612198 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:49:59.659816 1612198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:49:59.659885 1612198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:49:59.660020 1612198 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:49:59.660042 1612198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:49:59.660078 1612198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:49:59.660176 1612198 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:49:59.660187 1612198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:49:59.660221 1612198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:49:59.660287 1612198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-836310 san=[127.0.0.1 192.168.61.88 localhost minikube old-k8s-version-836310]
	I0630 15:49:59.690101 1612198 provision.go:177] copyRemoteCerts
	I0630 15:49:59.690174 1612198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:49:59.690206 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:49:59.693840 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.694258 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.694288 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.694508 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:49:59.694768 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.694956 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:49:59.695126 1612198 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:49:59.786280 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:49:59.819266 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0630 15:49:59.858611 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:49:59.895820 1612198 provision.go:87] duration metric: took 244.133486ms to configureAuth
	I0630 15:49:59.895857 1612198 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:49:59.896130 1612198 config.go:182] Loaded profile config "old-k8s-version-836310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:49:59.896227 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:49:59.900283 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.900660 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:49:59.900696 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:49:59.900885 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:49:59.901105 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.901263 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:49:59.901424 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:49:59.901624 1612198 main.go:141] libmachine: Using SSH client type: native
	I0630 15:49:59.901855 1612198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:49:59.901871 1612198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:50:00.172580 1612198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:50:00.172653 1612198 machine.go:96] duration metric: took 931.562537ms to provisionDockerMachine
	I0630 15:50:00.172676 1612198 start.go:293] postStartSetup for "old-k8s-version-836310" (driver="kvm2")
	I0630 15:50:00.172692 1612198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:50:00.172719 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:50:00.173147 1612198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:50:00.173187 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:50:00.176766 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.177134 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:50:00.177165 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.177382 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:50:00.177623 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:50:00.177830 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:50:00.177986 1612198 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:50:00.267062 1612198 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:50:00.272183 1612198 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:50:00.272217 1612198 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:50:00.272298 1612198 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:50:00.272382 1612198 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:50:00.272489 1612198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:50:00.283544 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:50:00.314841 1612198 start.go:296] duration metric: took 142.144195ms for postStartSetup
	I0630 15:50:00.314897 1612198 fix.go:56] duration metric: took 26.011002324s for fixHost
	I0630 15:50:00.314928 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:50:00.317796 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.318403 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:50:00.318434 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.318626 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:50:00.318886 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:50:00.319076 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:50:00.319359 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:50:00.319569 1612198 main.go:141] libmachine: Using SSH client type: native
	I0630 15:50:00.319784 1612198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0630 15:50:00.319794 1612198 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:50:00.434929 1612198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298600.418288486
	
	I0630 15:50:00.434966 1612198 fix.go:216] guest clock: 1751298600.418288486
	I0630 15:50:00.434977 1612198 fix.go:229] Guest: 2025-06-30 15:50:00.418288486 +0000 UTC Remote: 2025-06-30 15:50:00.314903555 +0000 UTC m=+52.674051787 (delta=103.384931ms)
	I0630 15:50:00.435027 1612198 fix.go:200] guest clock delta is within tolerance: 103.384931ms
	I0630 15:50:00.435035 1612198 start.go:83] releasing machines lock for "old-k8s-version-836310", held for 26.131178118s
	I0630 15:50:00.435068 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:50:00.435425 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:50:00.438994 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.439435 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:50:00.439471 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.439771 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:50:00.440388 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:50:00.440625 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .DriverName
	I0630 15:50:00.440749 1612198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:50:00.440824 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:50:00.440850 1612198 ssh_runner.go:195] Run: cat /version.json
	I0630 15:50:00.440878 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHHostname
	I0630 15:50:00.444123 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.444604 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:50:00.444635 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.444653 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.444983 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:50:00.445018 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:50:00.445053 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:00.445229 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:50:00.445360 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHPort
	I0630 15:50:00.445440 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:50:00.445574 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHKeyPath
	I0630 15:50:00.445591 1612198 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:50:00.445719 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetSSHUsername
	I0630 15:50:00.445880 1612198 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/old-k8s-version-836310/id_rsa Username:docker}
	I0630 15:50:00.580557 1612198 ssh_runner.go:195] Run: systemctl --version
	I0630 15:50:00.587811 1612198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:50:00.739111 1612198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:50:00.745996 1612198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:50:00.746078 1612198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:50:00.771486 1612198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
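The step above renames every bridge/podman CNI config under /etc/cni/net.d to a ".mk_disabled" suffix so the runtime's own bridge config wins. A minimal Go sketch of the same rename pass, assuming it runs directly on the node (the real code drives a `find ... -exec mv` over SSH, as logged):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs so the runtime
    // ignores them; the patterns and suffix match the log, the helper
    // itself is illustrative.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue // already disabled, or not a plain file
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        files, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println(files, err) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist] <nil>
    }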
	I0630 15:50:00.771520 1612198 start.go:495] detecting cgroup driver to use...
	I0630 15:50:00.771593 1612198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:50:00.796624 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:50:00.815567 1612198 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:50:00.815639 1612198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:50:00.833726 1612198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:50:00.859224 1612198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:50:01.021092 1612198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:50:01.209918 1612198 docker.go:246] disabling docker service ...
	I0630 15:50:01.210020 1612198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:50:01.233389 1612198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:50:01.251151 1612198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:50:01.471026 1612198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:50:01.642614 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:50:01.666159 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:50:01.696802 1612198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0630 15:50:01.696915 1612198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:50:01.711316 1612198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:50:01.711401 1612198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:50:01.727805 1612198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:50:01.742041 1612198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
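The four sed invocations above pin the pause image, force the cgroupfs cgroup manager, and re-seed conmon_cgroup in 02-crio.conf. A rough local equivalent of that edit sequence, with the same values as the log (the helper name and regex approach are assumptions, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // patchCrioConf applies the same three settings the sed commands do.
    func patchCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        // sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        // sed '/conmon_cgroup = .*/d' -- drop any stale setting first
        out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(out, nil)
        // sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
        out = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        fmt.Println(patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"))
    }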
	I0630 15:50:01.754590 1612198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:50:01.768369 1612198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:50:01.779128 1612198 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:50:01.779200 1612198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:50:01.794768 1612198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
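Note the fallback pattern here: the sysctl probe fails because br_netfilter is not loaded, so the code loads the module and then enables IPv4 forwarding. Condensed as a sketch, where `run` is a hypothetical stand-in for the ssh_runner seen in the log:

    package main

    import "fmt"

    // ensureNetfilter: if the bridge-netfilter sysctl is missing, load
    // br_netfilter, then turn on IPv4 forwarding either way.
    func ensureNetfilter(run func(cmd string) error) error {
        if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            if err := run("sudo modprobe br_netfilter"); err != nil {
                return err
            }
        }
        return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
    }

    func main() {
        err := ensureNetfilter(func(cmd string) error {
            fmt.Println("would run:", cmd) // dry-run stub
            return nil
        })
        fmt.Println("err:", err)
    }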
	I0630 15:50:01.809021 1612198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:50:01.960515 1612198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:50:02.134766 1612198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:50:02.134846 1612198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:50:02.142063 1612198 start.go:563] Will wait 60s for crictl version
	I0630 15:50:02.142172 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:02.147687 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:50:02.201744 1612198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:50:02.201863 1612198 ssh_runner.go:195] Run: crio --version
	I0630 15:50:02.243820 1612198 ssh_runner.go:195] Run: crio --version
	I0630 15:50:02.277906 1612198 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0630 15:50:02.279016 1612198 main.go:141] libmachine: (old-k8s-version-836310) Calling .GetIP
	I0630 15:50:02.282201 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:02.282693 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:f3:de", ip: ""} in network mk-old-k8s-version-836310: {Iface:virbr3 ExpiryTime:2025-06-30 16:49:48 +0000 UTC Type:0 Mac:52:54:00:5d:f3:de Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:old-k8s-version-836310 Clientid:01:52:54:00:5d:f3:de}
	I0630 15:50:02.282747 1612198 main.go:141] libmachine: (old-k8s-version-836310) DBG | domain old-k8s-version-836310 has defined IP address 192.168.61.88 and MAC address 52:54:00:5d:f3:de in network mk-old-k8s-version-836310
	I0630 15:50:02.283011 1612198 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0630 15:50:02.287903 1612198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:50:02.302833 1612198 kubeadm.go:875] updating cluster {Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:50:02.303006 1612198 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 15:50:02.303102 1612198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:50:02.403286 1612198 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0630 15:50:02.403374 1612198 ssh_runner.go:195] Run: which lz4
	I0630 15:50:02.408118 1612198 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:50:02.412940 1612198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:50:02.412977 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0630 15:50:04.224330 1612198 crio.go:462] duration metric: took 1.816251248s to copy over tarball
	I0630 15:50:04.224418 1612198 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:50:07.072039 1612198 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.847592108s)
	I0630 15:50:07.072094 1612198 crio.go:469] duration metric: took 2.847729132s to extract the tarball
	I0630 15:50:07.072104 1612198 ssh_runner.go:146] rm: /preloaded.tar.lz4
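The preload flow above is: stat the tarball on the node, scp it in when absent, unpack it under /var with xattrs preserved, then delete it. The same flow condensed into a sketch; `run` and `copyOver` are hypothetical stand-ins for ssh_runner and its scp helper:

    package main

    import "fmt"

    // restorePreload: copy the preload tarball in if missing, extract, clean up.
    func restorePreload(run func(string) error, copyOver func(local, remote string) error, local string) error {
        if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
            if err := copyOver(local, "/preloaded.tar.lz4"); err != nil {
                return err
            }
        }
        if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
            return err
        }
        return run("rm -f /preloaded.tar.lz4")
    }

    func main() {
        noop := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
        cp := func(l, r string) error { fmt.Println("would copy:", l, "->", r); return nil }
        fmt.Println(restorePreload(noop, cp, "preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"))
    }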
	I0630 15:50:07.116001 1612198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:50:07.165160 1612198 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0630 15:50:07.165211 1612198 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0630 15:50:07.165348 1612198 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:50:07.165377 1612198 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:50:07.165396 1612198 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:50:07.165491 1612198 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0630 15:50:07.165503 1612198 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:50:07.165635 1612198 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:50:07.165696 1612198 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0630 15:50:07.165725 1612198 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:50:07.170745 1612198 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:50:07.170766 1612198 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:50:07.170933 1612198 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:50:07.170976 1612198 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0630 15:50:07.171311 1612198 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:50:07.171462 1612198 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0630 15:50:07.171483 1612198 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:50:07.171596 1612198 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:50:07.370946 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0630 15:50:07.390561 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0630 15:50:07.418064 1612198 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0630 15:50:07.418135 1612198 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0630 15:50:07.418192 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:07.419718 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0630 15:50:07.428821 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:50:07.429857 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:50:07.431314 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:50:07.436938 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:50:07.496676 1612198 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0630 15:50:07.496747 1612198 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0630 15:50:07.496769 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:50:07.496795 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:07.575932 1612198 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0630 15:50:07.575994 1612198 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0630 15:50:07.576020 1612198 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0630 15:50:07.576067 1612198 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:50:07.576114 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:07.576157 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:07.632145 1612198 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0630 15:50:07.632213 1612198 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:50:07.632273 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:07.632407 1612198 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0630 15:50:07.632481 1612198 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:50:07.632556 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:07.638026 1612198 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0630 15:50:07.638088 1612198 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:50:07.638135 1612198 ssh_runner.go:195] Run: which crictl
	I0630 15:50:07.638168 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:50:07.638135 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:50:07.638223 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:50:07.638291 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:50:07.640618 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:50:07.641449 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:50:07.782185 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:50:07.782254 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:50:07.782386 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0630 15:50:07.793711 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:50:07.793775 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:50:07.793804 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:50:07.793899 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:50:07.887665 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0630 15:50:07.887694 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:50:07.905539 1612198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0630 15:50:08.004737 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0630 15:50:08.004780 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0630 15:50:08.013058 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0630 15:50:08.013100 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0630 15:50:08.013221 1612198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0630 15:50:08.013237 1612198 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0630 15:50:08.086871 1612198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0630 15:50:08.116450 1612198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0630 15:50:08.134245 1612198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0630 15:50:08.134285 1612198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0630 15:50:08.134316 1612198 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0630 15:50:08.477922 1612198 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:50:08.617232 1612198 cache_images.go:92] duration metric: took 1.451991651s to LoadCachedImages
	W0630 15:50:08.617379 1612198 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
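The "needs transfer" lines above come from a per-image check: inspect the image in the runtime, and on a miss or hash mismatch, remove the stale tag and load a replacement from the local cache (which fails here because the cached etcd file is absent). A sketch of that check under the same assumptions (`run` is a hypothetical ssh_runner stand-in with captured output):

    package main

    import (
        "fmt"
        "strings"
    )

    // needsTransfer reports whether an image must be loaded from cache.
    func needsTransfer(run func(string) (string, error), image, wantID string) bool {
        got, err := run("sudo podman image inspect --format {{.Id}} " + image)
        if err == nil && strings.TrimSpace(got) == wantID {
            return false // present with the expected hash
        }
        run("sudo /usr/bin/crictl rmi " + image) // best-effort cleanup, as logged
        return true
    }

    func main() {
        stub := func(cmd string) (string, error) { return "", fmt.Errorf("no such image") }
        fmt.Println(needsTransfer(stub, "registry.k8s.io/etcd:3.4.13-0",
            "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"))
    }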
	I0630 15:50:08.617449 1612198 kubeadm.go:926] updating node { 192.168.61.88 8443 v1.20.0 crio true true} ...
	I0630 15:50:08.617604 1612198 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-836310 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 15:50:08.617698 1612198 ssh_runner.go:195] Run: crio config
	I0630 15:50:08.671479 1612198 cni.go:84] Creating CNI manager for ""
	I0630 15:50:08.671507 1612198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 15:50:08.671517 1612198 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:50:08.671536 1612198 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-836310 NodeName:old-k8s-version-836310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0630 15:50:08.671673 1612198 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-836310"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 15:50:08.671745 1612198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0630 15:50:08.684860 1612198 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:50:08.684975 1612198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:50:08.696618 1612198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0630 15:50:08.718560 1612198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:50:08.739092 1612198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0630 15:50:08.764441 1612198 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0630 15:50:08.768981 1612198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
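The grep/cp pair above is an idempotent /etc/hosts update: check for the exact "IP<TAB>host" entry first, and only rewrite the file when it is missing, filtering out any stale line for the same host. The same pattern in plain Go (illustrative only; the log does it remotely via a bash pipeline and a temp file):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // addHostsEntry drops any stale line for host, appends "ip<TAB>host",
    // and swaps the file in atomically via a rename.
    func addHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        out := lines[:0]
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+host) {
                out = append(out, line)
            }
        }
        out = append(out, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(out, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // atomic replace on the same filesystem
    }

    func main() {
        fmt.Println(addHostsEntry("/etc/hosts", "192.168.61.88", "control-plane.minikube.internal"))
    }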
	I0630 15:50:08.787272 1612198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:50:08.947571 1612198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:50:08.984222 1612198 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310 for IP: 192.168.61.88
	I0630 15:50:08.984249 1612198 certs.go:194] generating shared ca certs ...
	I0630 15:50:08.984268 1612198 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:50:08.984448 1612198 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:50:08.984503 1612198 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:50:08.984517 1612198 certs.go:256] generating profile certs ...
	I0630 15:50:08.984627 1612198 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/client.key
	I0630 15:50:08.984698 1612198 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key.326a3c9b
	I0630 15:50:08.984766 1612198 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.key
	I0630 15:50:08.984920 1612198 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:50:08.984960 1612198 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:50:08.984977 1612198 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:50:08.985014 1612198 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:50:08.985044 1612198 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:50:08.985079 1612198 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:50:08.985131 1612198 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:50:08.985787 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:50:09.018967 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:50:09.051933 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:50:09.084296 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:50:09.115030 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0630 15:50:09.154991 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 15:50:09.187679 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:50:09.223705 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/old-k8s-version-836310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:50:09.261825 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:50:09.294638 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:50:09.326812 1612198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:50:09.359377 1612198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:50:09.385033 1612198 ssh_runner.go:195] Run: openssl version
	I0630 15:50:09.391702 1612198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:50:09.407474 1612198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:50:09.413162 1612198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:50:09.413252 1612198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:50:09.421258 1612198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:50:09.435682 1612198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:50:09.449443 1612198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:50:09.454994 1612198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:50:09.455082 1612198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:50:09.463069 1612198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:50:09.476964 1612198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:50:09.491788 1612198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:50:09.498506 1612198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:50:09.498584 1612198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:50:09.507084 1612198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:50:09.522003 1612198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:50:09.527657 1612198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0630 15:50:09.535620 1612198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0630 15:50:09.542815 1612198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0630 15:50:09.551044 1612198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0630 15:50:09.558750 1612198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0630 15:50:09.566618 1612198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
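Each `openssl x509 -checkend 86400` probe above asks whether a certificate expires within the next 24 hours (exit status nonzero if so). The same check done natively, as a sketch using Go's standard crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block in " + path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }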
	I0630 15:50:09.574234 1612198 kubeadm.go:392] StartCluster: {Name:old-k8s-version-836310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-836310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:50:09.574332 1612198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:50:09.574386 1612198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:50:09.617683 1612198 cri.go:89] found id: ""
	I0630 15:50:09.617765 1612198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:50:09.629772 1612198 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0630 15:50:09.629799 1612198 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0630 15:50:09.629875 1612198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0630 15:50:09.642259 1612198 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:50:09.642725 1612198 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-836310" does not appear in /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:50:09.642836 1612198 kubeconfig.go:62] /home/jenkins/minikube-integration/20991-1550299/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-836310" cluster setting kubeconfig missing "old-k8s-version-836310" context setting]
	I0630 15:50:09.643087 1612198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:50:09.644241 1612198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0630 15:50:09.655875 1612198 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.61.88
	I0630 15:50:09.655929 1612198 kubeadm.go:1152] stopping kube-system containers ...
	I0630 15:50:09.655950 1612198 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0630 15:50:09.656028 1612198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:50:09.695616 1612198 cri.go:89] found id: ""
	I0630 15:50:09.695697 1612198 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0630 15:50:09.714132 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:50:09.725324 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:50:09.725353 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:50:09.725432 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:50:09.735610 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:50:09.735700 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:50:09.747137 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:50:09.757602 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:50:09.757675 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:50:09.768803 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:50:09.779028 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:50:09.779127 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:50:09.790526 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:50:09.801089 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:50:09.801163 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:50:09.812182 1612198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:50:09.824041 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:50:09.906754 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:50:10.825120 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:50:11.102667 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0630 15:50:11.210682 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
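The restart path runs kubeadm phase-by-phase (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than one monolithic `kubeadm init`. A hypothetical driver for that same sequence, with the exact commands from the log; `run` again stands in for ssh_runner:

    package main

    import "fmt"

    // runInitPhases executes the kubeadm init phases in the order logged,
    // stopping at the first failure.
    func runInitPhases(run func(string) error) error {
        for _, phase := range []string{
            "certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
        } {
            cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase ` +
                phase + ` --config /var/tmp/minikube/kubeadm.yaml`
            if err := run(cmd); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        fmt.Println(runInitPhases(func(cmd string) error {
            fmt.Println("would run:", cmd)
            return nil
        }))
    }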
	I0630 15:50:11.285425 1612198 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:50:11.285561 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:11.786647 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:12.285680 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:12.786332 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:13.285681 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:13.786036 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:14.285673 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:14.786087 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:15.285889 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:15.785991 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:16.286483 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:16.785936 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:17.285698 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:17.786634 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:18.286344 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:18.786376 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:19.285626 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:19.786618 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:20.285583 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:20.786470 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:21.285823 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:21.785759 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:22.285676 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:22.785653 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:23.286619 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:23.786581 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:24.285614 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:24.786495 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:25.285705 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:25.785675 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:26.286283 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:26.786536 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:27.286685 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:27.786207 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:28.286451 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:28.786543 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:29.286705 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:29.786498 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:30.286527 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:30.786110 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:31.285726 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:31.786322 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:32.285805 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:32.786359 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:33.286358 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:33.786282 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:34.286511 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:34.785945 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:35.286466 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:35.786460 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:36.285631 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:36.786744 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:37.286693 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:37.786025 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:38.286503 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:38.785613 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:39.285813 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:39.786672 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:40.285720 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:40.786514 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:41.286196 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:41.786612 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:42.285664 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:42.786349 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:43.286621 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:43.786223 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:44.286048 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:44.785851 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:45.285594 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:45.785603 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:46.285608 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:46.785610 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:47.285619 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:47.786553 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:48.286710 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:48.786437 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:49.286417 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:49.786053 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:50.286546 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:50.786045 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:51.285861 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:51.786667 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:52.285608 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:52.785672 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:53.285731 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:53.786312 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:54.285733 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:54.785691 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:55.285754 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:55.786592 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:56.285782 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:56.786392 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:57.286117 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:57.785816 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:58.286184 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:58.786652 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:59.285636 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:50:59.786245 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:00.285598 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:00.786311 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:01.286540 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:01.785674 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:02.285717 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:02.786087 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:03.285759 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:03.785597 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:04.285711 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:04.786696 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:05.286352 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:05.785628 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:06.286651 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:06.786260 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:07.286073 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:07.786404 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:08.285874 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:08.786529 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:09.285601 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:09.786227 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:10.286615 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:10.786018 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
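The timestamps above show a ~500ms polling cadence: pgrep for the apiserver process until it appears or the wait budget runs out (here it never appears, so the loop exhausts its minute-long window). That wait loop as a generic sketch, with the interval inferred from the log and `run` as a hypothetical ssh_runner stand-in:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServer polls until pgrep finds the apiserver or timeout hits.
    func waitForAPIServer(run func(string) error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        alwaysFail := func(string) error { return fmt.Errorf("no process") }
        fmt.Println(waitForAPIServer(alwaysFail, 2*time.Second))
    }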
	I0630 15:51:11.286676 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:11.286764 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:11.339862 1612198 cri.go:89] found id: ""
	I0630 15:51:11.339894 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.339903 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:11.339910 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:11.339981 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:11.387621 1612198 cri.go:89] found id: ""
	I0630 15:51:11.387654 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.387665 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:11.387672 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:11.387765 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:11.437424 1612198 cri.go:89] found id: ""
	I0630 15:51:11.437467 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.437482 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:11.437511 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:11.437596 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:11.487497 1612198 cri.go:89] found id: ""
	I0630 15:51:11.487527 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.487538 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:11.487546 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:11.487640 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:11.533152 1612198 cri.go:89] found id: ""
	I0630 15:51:11.533184 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.533194 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:11.533202 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:11.533297 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:11.573253 1612198 cri.go:89] found id: ""
	I0630 15:51:11.573290 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.573302 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:11.573311 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:11.573388 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:11.613150 1612198 cri.go:89] found id: ""
	I0630 15:51:11.613185 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.613197 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:11.613205 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:11.613279 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:11.660254 1612198 cri.go:89] found id: ""
	I0630 15:51:11.660287 1612198 logs.go:282] 0 containers: []
	W0630 15:51:11.660299 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
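
The block above walks one component at a time through `crictl ps -a --quiet --name=<component>`, which prints matching container IDs one per line (or nothing). A small sketch of that lookup, assuming crictl is on PATH and the CRI socket is reachable (in the real run this executes over SSH inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs crictl prints with --quiet, one per line.
    // crictl exits 0 with empty output when nothing matches, so an empty
    // slice (not an error) is the "no container found" case seen above.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(name)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
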
	I0630 15:51:11.660312 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:11.660342 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:11.715677 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:11.715724 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:11.733453 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:11.733497 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:11.845304 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:11.845333 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:11.845350 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:11.919989 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:11.920033 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:14.468817 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:14.492583 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:14.493021 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:14.546214 1612198 cri.go:89] found id: ""
	I0630 15:51:14.546265 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.546277 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:14.546286 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:14.546379 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:14.587814 1612198 cri.go:89] found id: ""
	I0630 15:51:14.587884 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.587899 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:14.587908 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:14.587988 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:14.636039 1612198 cri.go:89] found id: ""
	I0630 15:51:14.636072 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.636083 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:14.636113 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:14.636189 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:14.681061 1612198 cri.go:89] found id: ""
	I0630 15:51:14.681104 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.681120 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:14.681130 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:14.681216 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:14.731682 1612198 cri.go:89] found id: ""
	I0630 15:51:14.731714 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.731726 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:14.731733 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:14.731800 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:14.793866 1612198 cri.go:89] found id: ""
	I0630 15:51:14.793894 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.793903 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:14.793917 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:14.793988 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:14.843694 1612198 cri.go:89] found id: ""
	I0630 15:51:14.843728 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.843743 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:14.843751 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:14.843817 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:14.885446 1612198 cri.go:89] found id: ""
	I0630 15:51:14.885507 1612198 logs.go:282] 0 containers: []
	W0630 15:51:14.885522 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:14.885539 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:14.885572 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:14.993572 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
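
The "failed describe nodes" block repeats throughout this section because nothing is listening on localhost:8443 inside the guest, so kubectl exits 1 with the refusal on stderr. A sketch of how such a failure report can be assembled in Go, capturing stdout and stderr separately and recovering the exit status from *exec.ExitError (the paths are the ones from the log; this is illustrative, not minikube's code):

    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr

        if err := cmd.Run(); err != nil {
            var ee *exec.ExitError
            if errors.As(err, &ee) {
                // With no apiserver on localhost:8443, kubectl exits 1 and
                // the refusal message lands on stderr, as in the log above.
                fmt.Printf("Process exited with status %d\n", ee.ExitCode())
            }
            fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
        }
    }
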
	I0630 15:51:14.993607 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:14.993627 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:15.082998 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:15.083051 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:15.126440 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:15.126479 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:15.176605 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:15.176653 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:17.691536 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:17.709852 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:17.709951 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:17.761187 1612198 cri.go:89] found id: ""
	I0630 15:51:17.761224 1612198 logs.go:282] 0 containers: []
	W0630 15:51:17.761237 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:17.761246 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:17.761318 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:17.797283 1612198 cri.go:89] found id: ""
	I0630 15:51:17.797319 1612198 logs.go:282] 0 containers: []
	W0630 15:51:17.797330 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:17.797339 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:17.797422 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:17.835718 1612198 cri.go:89] found id: ""
	I0630 15:51:17.835754 1612198 logs.go:282] 0 containers: []
	W0630 15:51:17.835780 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:17.835786 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:17.835852 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:17.877346 1612198 cri.go:89] found id: ""
	I0630 15:51:17.877377 1612198 logs.go:282] 0 containers: []
	W0630 15:51:17.877386 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:17.877392 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:17.877485 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:17.914730 1612198 cri.go:89] found id: ""
	I0630 15:51:17.914776 1612198 logs.go:282] 0 containers: []
	W0630 15:51:17.914791 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:17.914800 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:17.914895 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:17.957634 1612198 cri.go:89] found id: ""
	I0630 15:51:17.957679 1612198 logs.go:282] 0 containers: []
	W0630 15:51:17.957690 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:17.957699 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:17.957774 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:17.998347 1612198 cri.go:89] found id: ""
	I0630 15:51:17.998389 1612198 logs.go:282] 0 containers: []
	W0630 15:51:17.998403 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:17.998413 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:17.998489 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:18.046418 1612198 cri.go:89] found id: ""
	I0630 15:51:18.046456 1612198 logs.go:282] 0 containers: []
	W0630 15:51:18.046472 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:18.046488 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:18.046505 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:18.116020 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:18.116073 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:18.130550 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:18.130601 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:18.207367 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:18.207399 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:18.207423 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:18.285579 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:18.285634 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
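
The "container status" command above uses a shell fallback chain: resolve crictl with `which` (falling back to the bare name so the later invocation still produces a sensible error), and if the whole crictl listing fails, try `docker ps -a` instead. Because the `||` chain is evaluated by bash, a single run covers both runtimes. A minimal wrapper sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl; fall back to docker if crictl is absent or fails.
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker listings failed:", err)
        }
        fmt.Print(string(out))
    }
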
	I0630 15:51:20.831084 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:20.858482 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:20.858563 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:20.917246 1612198 cri.go:89] found id: ""
	I0630 15:51:20.917278 1612198 logs.go:282] 0 containers: []
	W0630 15:51:20.917290 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:20.917298 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:20.917368 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:20.986086 1612198 cri.go:89] found id: ""
	I0630 15:51:20.986188 1612198 logs.go:282] 0 containers: []
	W0630 15:51:20.986207 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:20.986217 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:20.986500 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:21.039812 1612198 cri.go:89] found id: ""
	I0630 15:51:21.039855 1612198 logs.go:282] 0 containers: []
	W0630 15:51:21.039870 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:21.039881 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:21.040003 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:21.086872 1612198 cri.go:89] found id: ""
	I0630 15:51:21.086908 1612198 logs.go:282] 0 containers: []
	W0630 15:51:21.086921 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:21.086932 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:21.086995 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:21.131641 1612198 cri.go:89] found id: ""
	I0630 15:51:21.131670 1612198 logs.go:282] 0 containers: []
	W0630 15:51:21.131682 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:21.131691 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:21.131767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:21.176744 1612198 cri.go:89] found id: ""
	I0630 15:51:21.176775 1612198 logs.go:282] 0 containers: []
	W0630 15:51:21.176787 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:21.176796 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:21.176864 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:21.226229 1612198 cri.go:89] found id: ""
	I0630 15:51:21.226266 1612198 logs.go:282] 0 containers: []
	W0630 15:51:21.226280 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:21.226290 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:21.226367 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:21.275731 1612198 cri.go:89] found id: ""
	I0630 15:51:21.275777 1612198 logs.go:282] 0 containers: []
	W0630 15:51:21.275790 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:21.275807 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:21.275831 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:21.338778 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:21.338830 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:21.359615 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:21.359651 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:21.441862 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:21.441911 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:21.441927 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:21.535027 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:21.535079 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:24.082074 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:24.102403 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:24.102507 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:24.158406 1612198 cri.go:89] found id: ""
	I0630 15:51:24.158447 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.158462 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:24.158473 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:24.158556 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:24.206636 1612198 cri.go:89] found id: ""
	I0630 15:51:24.206677 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.206692 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:24.206702 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:24.206776 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:24.257737 1612198 cri.go:89] found id: ""
	I0630 15:51:24.257783 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.257796 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:24.257805 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:24.257874 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:24.309833 1612198 cri.go:89] found id: ""
	I0630 15:51:24.309865 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.309876 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:24.309884 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:24.309953 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:24.374861 1612198 cri.go:89] found id: ""
	I0630 15:51:24.374903 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.374918 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:24.374931 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:24.375022 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:24.426575 1612198 cri.go:89] found id: ""
	I0630 15:51:24.426611 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.426625 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:24.426636 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:24.426705 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:24.482371 1612198 cri.go:89] found id: ""
	I0630 15:51:24.482407 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.482421 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:24.482430 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:24.482501 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:24.537207 1612198 cri.go:89] found id: ""
	I0630 15:51:24.537259 1612198 logs.go:282] 0 containers: []
	W0630 15:51:24.537271 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:24.537292 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:24.537313 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:24.619877 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:24.620029 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:24.653806 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:24.653857 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:24.767254 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:24.767297 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:24.767317 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:24.878233 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:24.878286 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:27.437580 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:27.458594 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:27.458682 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:27.505703 1612198 cri.go:89] found id: ""
	I0630 15:51:27.505744 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.505755 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:27.505764 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:27.505833 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:27.550485 1612198 cri.go:89] found id: ""
	I0630 15:51:27.550521 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.550530 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:27.550538 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:27.550609 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:27.597394 1612198 cri.go:89] found id: ""
	I0630 15:51:27.597454 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.597466 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:27.597477 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:27.597544 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:27.641270 1612198 cri.go:89] found id: ""
	I0630 15:51:27.641310 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.641322 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:27.641331 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:27.641443 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:27.685735 1612198 cri.go:89] found id: ""
	I0630 15:51:27.685819 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.685836 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:27.685845 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:27.685916 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:27.730226 1612198 cri.go:89] found id: ""
	I0630 15:51:27.730261 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.730273 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:27.730281 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:27.730350 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:27.777867 1612198 cri.go:89] found id: ""
	I0630 15:51:27.777901 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.777912 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:27.777920 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:27.777986 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:27.821930 1612198 cri.go:89] found id: ""
	I0630 15:51:27.821957 1612198 logs.go:282] 0 containers: []
	W0630 15:51:27.821971 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:27.821981 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:27.821995 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:27.901915 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:27.901949 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:27.901966 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:27.983042 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:27.983087 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:28.027443 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:28.027480 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:28.083188 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:28.083233 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
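
Note that every log source gathered above is bounded: journalctl is capped with `-n 400` and the filtered dmesg output is piped through `tail -n 400`, so a wedged node cannot balloon the report. A sketch of applying the same cap locally for a command that has no built-in limit (hedged; `lastLines` is an illustrative helper, not part of minikube):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // lastLines keeps only the final n lines of s, like `tail -n`.
    func lastLines(s string, n int) string {
        lines := strings.Split(strings.TrimRight(s, "\n"), "\n")
        if len(lines) > n {
            lines = lines[len(lines)-n:]
        }
        return strings.Join(lines, "\n")
    }

    func main() {
        out, _ := exec.Command("/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg").CombinedOutput()
        fmt.Println(lastLines(string(out), 400)) // same cap the log's tail -n 400 applies
    }
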
	I0630 15:51:30.599737 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:30.621016 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:30.621109 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:30.671259 1612198 cri.go:89] found id: ""
	I0630 15:51:30.671300 1612198 logs.go:282] 0 containers: []
	W0630 15:51:30.671313 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:30.671323 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:30.671407 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:30.720528 1612198 cri.go:89] found id: ""
	I0630 15:51:30.720572 1612198 logs.go:282] 0 containers: []
	W0630 15:51:30.720585 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:30.720593 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:30.720667 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:30.765167 1612198 cri.go:89] found id: ""
	I0630 15:51:30.765209 1612198 logs.go:282] 0 containers: []
	W0630 15:51:30.765220 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:30.765227 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:30.765296 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:30.811602 1612198 cri.go:89] found id: ""
	I0630 15:51:30.811634 1612198 logs.go:282] 0 containers: []
	W0630 15:51:30.811645 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:30.811653 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:30.811707 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:30.864471 1612198 cri.go:89] found id: ""
	I0630 15:51:30.864503 1612198 logs.go:282] 0 containers: []
	W0630 15:51:30.864515 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:30.864524 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:30.864586 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:30.914521 1612198 cri.go:89] found id: ""
	I0630 15:51:30.914553 1612198 logs.go:282] 0 containers: []
	W0630 15:51:30.914564 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:30.914573 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:30.914638 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:30.973118 1612198 cri.go:89] found id: ""
	I0630 15:51:30.973159 1612198 logs.go:282] 0 containers: []
	W0630 15:51:30.973170 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:30.973179 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:30.973248 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:31.022277 1612198 cri.go:89] found id: ""
	I0630 15:51:31.022324 1612198 logs.go:282] 0 containers: []
	W0630 15:51:31.022339 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:31.022353 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:31.022371 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:31.103700 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:31.103803 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:31.124678 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:31.124722 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:31.232411 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:31.232442 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:31.232459 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:31.324741 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:31.324797 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:33.878364 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:33.902982 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:33.903082 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:33.961380 1612198 cri.go:89] found id: ""
	I0630 15:51:33.961443 1612198 logs.go:282] 0 containers: []
	W0630 15:51:33.961456 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:33.961465 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:33.961523 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:34.016807 1612198 cri.go:89] found id: ""
	I0630 15:51:34.016848 1612198 logs.go:282] 0 containers: []
	W0630 15:51:34.016860 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:34.016868 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:34.016936 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:34.073164 1612198 cri.go:89] found id: ""
	I0630 15:51:34.073201 1612198 logs.go:282] 0 containers: []
	W0630 15:51:34.073213 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:34.073221 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:34.073310 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:34.124577 1612198 cri.go:89] found id: ""
	I0630 15:51:34.124609 1612198 logs.go:282] 0 containers: []
	W0630 15:51:34.124618 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:34.124626 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:34.124691 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:34.178191 1612198 cri.go:89] found id: ""
	I0630 15:51:34.178225 1612198 logs.go:282] 0 containers: []
	W0630 15:51:34.178237 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:34.178294 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:34.178370 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:34.223728 1612198 cri.go:89] found id: ""
	I0630 15:51:34.223763 1612198 logs.go:282] 0 containers: []
	W0630 15:51:34.223775 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:34.223784 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:34.223869 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:34.273046 1612198 cri.go:89] found id: ""
	I0630 15:51:34.273081 1612198 logs.go:282] 0 containers: []
	W0630 15:51:34.273093 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:34.273101 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:34.273166 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:34.324441 1612198 cri.go:89] found id: ""
	I0630 15:51:34.324477 1612198 logs.go:282] 0 containers: []
	W0630 15:51:34.324485 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:34.324495 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:34.324510 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:34.430592 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:34.430712 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:34.486840 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:34.486889 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:34.555976 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:34.556025 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:34.576527 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:34.576575 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:34.676314 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
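
Each retry cycle ends with the same sequence of "Gathering logs for X ..." lines (kubelet, dmesg, describe nodes, CRI-O, container status, in varying order). One plausible shape for that dispatch is a table of source names mapped to shell commands, walked with each failure logged but non-fatal so one bad source cannot abort the report; the table below is an illustrative reconstruction from the log, not minikube's actual logs.go contents.

    package main

    import (
        "fmt"
        "os/exec"
    )

    var logSources = []struct{ name, cmd string }{
        {"kubelet", "sudo journalctl -u kubelet -n 400"},
        {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        {"CRI-O", "sudo journalctl -u crio -n 400"},
        {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
        for _, src := range logSources {
            fmt.Printf("Gathering logs for %s ...\n", src.name)
            out, err := exec.Command("/bin/bash", "-c", src.cmd).CombinedOutput()
            if err != nil {
                // Log and keep going, as the W-lines above do for failures.
                fmt.Printf("failed %s: %v\n", src.name, err)
            }
            fmt.Print(string(out))
        }
    }
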
	I0630 15:51:37.177569 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:37.201791 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:37.201873 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:37.269056 1612198 cri.go:89] found id: ""
	I0630 15:51:37.269082 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.269089 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:37.269100 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:37.269155 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:37.317818 1612198 cri.go:89] found id: ""
	I0630 15:51:37.317846 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.317857 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:37.317875 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:37.317940 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:37.366582 1612198 cri.go:89] found id: ""
	I0630 15:51:37.366609 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.366619 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:37.366627 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:37.366695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:37.414655 1612198 cri.go:89] found id: ""
	I0630 15:51:37.414691 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.414704 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:37.414714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:37.414787 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:37.465683 1612198 cri.go:89] found id: ""
	I0630 15:51:37.465722 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.465735 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:37.465744 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:37.465833 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:37.522577 1612198 cri.go:89] found id: ""
	I0630 15:51:37.522607 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.522615 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:37.522621 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:37.522685 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:37.598689 1612198 cri.go:89] found id: ""
	I0630 15:51:37.598723 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.598736 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:37.598745 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:37.598807 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:37.653627 1612198 cri.go:89] found id: ""
	I0630 15:51:37.653679 1612198 logs.go:282] 0 containers: []
	W0630 15:51:37.653695 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:37.653714 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:37.653740 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:37.705757 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:37.705808 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:37.763021 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:37.763064 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:37.783170 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:37.783204 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:37.869639 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:37.869668 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:37.869688 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:40.469110 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:40.493727 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:40.493830 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:40.542885 1612198 cri.go:89] found id: ""
	I0630 15:51:40.542916 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.542929 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:40.542937 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:40.543001 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:40.596323 1612198 cri.go:89] found id: ""
	I0630 15:51:40.596403 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.596433 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:40.596445 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:40.596627 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:40.637819 1612198 cri.go:89] found id: ""
	I0630 15:51:40.637858 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.637872 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:40.637884 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:40.637961 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:40.677144 1612198 cri.go:89] found id: ""
	I0630 15:51:40.677181 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.677195 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:40.677205 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:40.677282 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:40.726836 1612198 cri.go:89] found id: ""
	I0630 15:51:40.726879 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.726894 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:40.726903 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:40.726997 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:40.794482 1612198 cri.go:89] found id: ""
	I0630 15:51:40.794518 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.794530 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:40.794546 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:40.794623 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:40.855733 1612198 cri.go:89] found id: ""
	I0630 15:51:40.855769 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.855788 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:40.855798 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:40.855865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:40.918210 1612198 cri.go:89] found id: ""
	I0630 15:51:40.918243 1612198 logs.go:282] 0 containers: []
	W0630 15:51:40.918254 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:40.918268 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:40.918286 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:40.978377 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:40.978471 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:40.997558 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:40.997590 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:41.090102 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:41.090150 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:41.090169 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:41.174881 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:41.174943 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:43.724984 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:43.746973 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:43.747043 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:43.798475 1612198 cri.go:89] found id: ""
	I0630 15:51:43.798533 1612198 logs.go:282] 0 containers: []
	W0630 15:51:43.798545 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:43.798553 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:43.798626 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:43.842538 1612198 cri.go:89] found id: ""
	I0630 15:51:43.842579 1612198 logs.go:282] 0 containers: []
	W0630 15:51:43.842591 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:43.842600 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:43.842688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:43.898295 1612198 cri.go:89] found id: ""
	I0630 15:51:43.898331 1612198 logs.go:282] 0 containers: []
	W0630 15:51:43.898345 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:43.898353 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:43.898451 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:43.957925 1612198 cri.go:89] found id: ""
	I0630 15:51:43.957968 1612198 logs.go:282] 0 containers: []
	W0630 15:51:43.957980 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:43.957989 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:43.958060 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:44.010619 1612198 cri.go:89] found id: ""
	I0630 15:51:44.010646 1612198 logs.go:282] 0 containers: []
	W0630 15:51:44.010654 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:44.010661 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:44.010714 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:44.062800 1612198 cri.go:89] found id: ""
	I0630 15:51:44.062839 1612198 logs.go:282] 0 containers: []
	W0630 15:51:44.062851 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:44.062861 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:44.062935 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:44.106862 1612198 cri.go:89] found id: ""
	I0630 15:51:44.106894 1612198 logs.go:282] 0 containers: []
	W0630 15:51:44.106907 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:44.106916 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:44.106982 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:44.161642 1612198 cri.go:89] found id: ""
	I0630 15:51:44.161677 1612198 logs.go:282] 0 containers: []
	W0630 15:51:44.161686 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:44.161697 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:44.161709 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:44.225185 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:44.225242 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:44.260509 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:44.260550 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:44.384432 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:44.384460 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:44.384477 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:44.492195 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:44.492256 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:47.053552 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:47.086176 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:47.086247 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:47.144755 1612198 cri.go:89] found id: ""
	I0630 15:51:47.144789 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.144801 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:47.144809 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:47.144884 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:47.193686 1612198 cri.go:89] found id: ""
	I0630 15:51:47.193720 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.193731 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:47.193740 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:47.193812 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:47.244489 1612198 cri.go:89] found id: ""
	I0630 15:51:47.244526 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.244537 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:47.244545 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:47.244625 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:47.290277 1612198 cri.go:89] found id: ""
	I0630 15:51:47.290315 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.290326 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:47.290335 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:47.290413 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:47.340656 1612198 cri.go:89] found id: ""
	I0630 15:51:47.340689 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.340700 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:47.340708 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:47.340782 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:47.404132 1612198 cri.go:89] found id: ""
	I0630 15:51:47.404168 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.404179 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:47.404187 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:47.404257 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:47.455579 1612198 cri.go:89] found id: ""
	I0630 15:51:47.455654 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.455667 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:47.455675 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:47.455750 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:47.500470 1612198 cri.go:89] found id: ""
	I0630 15:51:47.500509 1612198 logs.go:282] 0 containers: []
	W0630 15:51:47.500522 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:47.500537 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:47.500556 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:47.559075 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:47.559124 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:47.615956 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:47.616004 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:47.635647 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:47.635691 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:47.738937 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:47.738972 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:47.738989 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:50.344476 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:50.363886 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:50.363983 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:50.409519 1612198 cri.go:89] found id: ""
	I0630 15:51:50.409553 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.409566 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:50.409576 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:50.409650 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:50.456683 1612198 cri.go:89] found id: ""
	I0630 15:51:50.456721 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.456734 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:50.456742 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:50.456810 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:50.506816 1612198 cri.go:89] found id: ""
	I0630 15:51:50.506860 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.506874 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:50.506884 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:50.507041 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:50.555757 1612198 cri.go:89] found id: ""
	I0630 15:51:50.555793 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.555804 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:50.555812 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:50.555885 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:50.605837 1612198 cri.go:89] found id: ""
	I0630 15:51:50.605873 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.605889 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:50.605897 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:50.605977 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:50.646632 1612198 cri.go:89] found id: ""
	I0630 15:51:50.646665 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.646677 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:50.646685 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:50.646751 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:50.689708 1612198 cri.go:89] found id: ""
	I0630 15:51:50.689747 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.689756 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:50.689763 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:50.689823 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:50.731053 1612198 cri.go:89] found id: ""
	I0630 15:51:50.731085 1612198 logs.go:282] 0 containers: []
	W0630 15:51:50.731097 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:50.731109 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:50.731126 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:50.786238 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:50.786285 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:50.802283 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:50.802326 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:50.884245 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:50.884275 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:50.884294 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:50.962642 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:50.962702 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:53.515469 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:53.539024 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:53.539121 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:53.590983 1612198 cri.go:89] found id: ""
	I0630 15:51:53.591032 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.591045 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:53.591055 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:53.591176 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:53.635201 1612198 cri.go:89] found id: ""
	I0630 15:51:53.635240 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.635251 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:53.635260 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:53.635329 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:53.683303 1612198 cri.go:89] found id: ""
	I0630 15:51:53.683477 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.683526 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:53.683552 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:53.683657 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:53.738779 1612198 cri.go:89] found id: ""
	I0630 15:51:53.738819 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.738832 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:53.738840 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:53.738925 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:53.783382 1612198 cri.go:89] found id: ""
	I0630 15:51:53.783417 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.783428 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:53.783437 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:53.783516 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:53.831654 1612198 cri.go:89] found id: ""
	I0630 15:51:53.831687 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.831714 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:53.831725 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:53.831884 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:53.887834 1612198 cri.go:89] found id: ""
	I0630 15:51:53.887875 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.887890 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:53.887901 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:53.887996 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:53.937164 1612198 cri.go:89] found id: ""
	I0630 15:51:53.937261 1612198 logs.go:282] 0 containers: []
	W0630 15:51:53.937293 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:53.937330 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:53.937363 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:54.016951 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:54.017078 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:54.045565 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:54.045600 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:54.147148 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:51:54.147174 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:54.147189 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:54.271056 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:54.271132 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:56.837591 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:51:56.857288 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:51:56.857382 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:51:56.897161 1612198 cri.go:89] found id: ""
	I0630 15:51:56.897205 1612198 logs.go:282] 0 containers: []
	W0630 15:51:56.897216 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:51:56.897224 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:51:56.897300 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:51:56.949568 1612198 cri.go:89] found id: ""
	I0630 15:51:56.949610 1612198 logs.go:282] 0 containers: []
	W0630 15:51:56.949623 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:51:56.949632 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:51:56.949721 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:51:56.998176 1612198 cri.go:89] found id: ""
	I0630 15:51:56.998210 1612198 logs.go:282] 0 containers: []
	W0630 15:51:56.998223 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:51:56.998231 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:51:56.998303 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:51:57.050502 1612198 cri.go:89] found id: ""
	I0630 15:51:57.050537 1612198 logs.go:282] 0 containers: []
	W0630 15:51:57.050551 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:51:57.050560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:51:57.050630 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:51:57.118371 1612198 cri.go:89] found id: ""
	I0630 15:51:57.118398 1612198 logs.go:282] 0 containers: []
	W0630 15:51:57.118418 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:51:57.118425 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:51:57.118487 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:51:57.166283 1612198 cri.go:89] found id: ""
	I0630 15:51:57.166321 1612198 logs.go:282] 0 containers: []
	W0630 15:51:57.166332 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:51:57.166341 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:51:57.166427 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:51:57.218989 1612198 cri.go:89] found id: ""
	I0630 15:51:57.219028 1612198 logs.go:282] 0 containers: []
	W0630 15:51:57.219039 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:51:57.219048 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:51:57.219139 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:51:57.267817 1612198 cri.go:89] found id: ""
	I0630 15:51:57.267859 1612198 logs.go:282] 0 containers: []
	W0630 15:51:57.267873 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:51:57.267887 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:51:57.267904 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:51:57.389474 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:51:57.389546 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:51:57.439813 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:51:57.439850 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:51:57.503585 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:51:57.503700 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:51:57.525341 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:51:57.525382 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:51:57.622274 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:00.123112 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:00.140843 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:00.140926 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:00.183399 1612198 cri.go:89] found id: ""
	I0630 15:52:00.183456 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.183468 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:00.183478 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:00.183573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:00.228176 1612198 cri.go:89] found id: ""
	I0630 15:52:00.228217 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.228227 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:00.228234 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:00.228296 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:00.270412 1612198 cri.go:89] found id: ""
	I0630 15:52:00.270455 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.270465 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:00.270471 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:00.270531 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:00.312848 1612198 cri.go:89] found id: ""
	I0630 15:52:00.312882 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.312894 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:00.312902 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:00.312973 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:00.352398 1612198 cri.go:89] found id: ""
	I0630 15:52:00.352434 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.352442 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:00.352448 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:00.352509 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:00.393652 1612198 cri.go:89] found id: ""
	I0630 15:52:00.393697 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.393710 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:00.393719 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:00.393784 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:00.434178 1612198 cri.go:89] found id: ""
	I0630 15:52:00.434218 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.434230 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:00.434239 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:00.434312 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:00.474604 1612198 cri.go:89] found id: ""
	I0630 15:52:00.474635 1612198 logs.go:282] 0 containers: []
	W0630 15:52:00.474648 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:00.474663 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:00.474682 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:00.525683 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:00.525725 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:00.541536 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:00.541581 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:00.628711 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:00.628735 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:00.628749 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:00.715312 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:00.715365 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:03.258884 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:03.280387 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:03.280513 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:03.318270 1612198 cri.go:89] found id: ""
	I0630 15:52:03.318311 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.318324 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:03.318334 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:03.318415 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:03.360367 1612198 cri.go:89] found id: ""
	I0630 15:52:03.360411 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.360424 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:03.360434 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:03.360508 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:03.406359 1612198 cri.go:89] found id: ""
	I0630 15:52:03.406416 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.406431 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:03.406443 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:03.406548 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:03.444939 1612198 cri.go:89] found id: ""
	I0630 15:52:03.444977 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.444990 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:03.445000 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:03.445089 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:03.489114 1612198 cri.go:89] found id: ""
	I0630 15:52:03.489168 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.489177 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:03.489183 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:03.489243 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:03.533378 1612198 cri.go:89] found id: ""
	I0630 15:52:03.533428 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.533440 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:03.533450 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:03.533511 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:03.572125 1612198 cri.go:89] found id: ""
	I0630 15:52:03.572166 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.572178 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:03.572187 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:03.572262 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:03.620152 1612198 cri.go:89] found id: ""
	I0630 15:52:03.620186 1612198 logs.go:282] 0 containers: []
	W0630 15:52:03.620194 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:03.620205 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:03.620218 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:03.671813 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:03.671875 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:03.688282 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:03.688337 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:03.762894 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:03.762932 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:03.762952 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:03.846484 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:03.846531 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:06.403139 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:06.425511 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:06.425577 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:06.466461 1612198 cri.go:89] found id: ""
	I0630 15:52:06.466490 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.466499 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:06.466505 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:06.466569 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:06.514674 1612198 cri.go:89] found id: ""
	I0630 15:52:06.514712 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.514723 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:06.514732 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:06.514788 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:06.551741 1612198 cri.go:89] found id: ""
	I0630 15:52:06.551775 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.551789 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:06.551797 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:06.551865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:06.590450 1612198 cri.go:89] found id: ""
	I0630 15:52:06.590493 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.590501 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:06.590508 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:06.590583 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:06.630624 1612198 cri.go:89] found id: ""
	I0630 15:52:06.630660 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.630671 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:06.630678 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:06.630747 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:06.677292 1612198 cri.go:89] found id: ""
	I0630 15:52:06.677333 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.677349 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:06.677360 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:06.677479 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:06.725668 1612198 cri.go:89] found id: ""
	I0630 15:52:06.725704 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.725712 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:06.725719 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:06.725795 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:06.777511 1612198 cri.go:89] found id: ""
	I0630 15:52:06.777546 1612198 logs.go:282] 0 containers: []
	W0630 15:52:06.777569 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:06.777595 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:06.777618 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:06.795211 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:06.795263 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:06.876068 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:06.876115 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:06.876134 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:06.972774 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:06.972819 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:07.020056 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:07.020091 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:09.587055 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:09.615741 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:09.615824 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:09.681142 1612198 cri.go:89] found id: ""
	I0630 15:52:09.681181 1612198 logs.go:282] 0 containers: []
	W0630 15:52:09.681193 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:09.681204 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:09.681287 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:09.730586 1612198 cri.go:89] found id: ""
	I0630 15:52:09.730622 1612198 logs.go:282] 0 containers: []
	W0630 15:52:09.730635 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:09.730644 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:09.730717 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:09.779370 1612198 cri.go:89] found id: ""
	I0630 15:52:09.779411 1612198 logs.go:282] 0 containers: []
	W0630 15:52:09.779423 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:09.779435 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:09.779523 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:09.835047 1612198 cri.go:89] found id: ""
	I0630 15:52:09.835082 1612198 logs.go:282] 0 containers: []
	W0630 15:52:09.835093 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:09.835102 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:09.835173 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:09.887124 1612198 cri.go:89] found id: ""
	I0630 15:52:09.887155 1612198 logs.go:282] 0 containers: []
	W0630 15:52:09.887163 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:09.887170 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:09.887227 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:09.946150 1612198 cri.go:89] found id: ""
	I0630 15:52:09.946183 1612198 logs.go:282] 0 containers: []
	W0630 15:52:09.946195 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:09.946204 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:09.946271 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:09.998962 1612198 cri.go:89] found id: ""
	I0630 15:52:09.999000 1612198 logs.go:282] 0 containers: []
	W0630 15:52:09.999014 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:09.999023 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:09.999118 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:10.050389 1612198 cri.go:89] found id: ""
	I0630 15:52:10.050427 1612198 logs.go:282] 0 containers: []
	W0630 15:52:10.050442 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:10.050458 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:10.050490 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:10.069683 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:10.069726 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:10.174249 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:10.174290 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:10.174309 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:10.285599 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:10.285728 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:10.345190 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:10.345225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:12.910985 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:12.944070 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:12.944157 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:13.021709 1612198 cri.go:89] found id: ""
	I0630 15:52:13.021744 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.021756 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:13.021764 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:13.021838 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:13.076395 1612198 cri.go:89] found id: ""
	I0630 15:52:13.076430 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.076443 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:13.076452 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:13.076527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:13.127329 1612198 cri.go:89] found id: ""
	I0630 15:52:13.127369 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.127389 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:13.127399 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:13.127470 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:13.185523 1612198 cri.go:89] found id: ""
	I0630 15:52:13.185562 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.185575 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:13.185584 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:13.185661 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:13.246195 1612198 cri.go:89] found id: ""
	I0630 15:52:13.246233 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.246246 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:13.246254 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:13.246336 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:13.303158 1612198 cri.go:89] found id: ""
	I0630 15:52:13.303201 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.303214 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:13.303223 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:13.303288 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:13.353127 1612198 cri.go:89] found id: ""
	I0630 15:52:13.353151 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.353158 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:13.353164 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:13.353207 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:13.403503 1612198 cri.go:89] found id: ""
	I0630 15:52:13.403539 1612198 logs.go:282] 0 containers: []
	W0630 15:52:13.403552 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:13.403569 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:13.403586 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:13.423865 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:13.423909 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:13.521482 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:13.521515 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:13.521535 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:13.636898 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:13.636946 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:13.770133 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:13.770177 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:16.367392 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:16.389872 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:16.389946 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:16.431020 1612198 cri.go:89] found id: ""
	I0630 15:52:16.431056 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.431070 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:16.431080 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:16.431153 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:16.480430 1612198 cri.go:89] found id: ""
	I0630 15:52:16.480469 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.480481 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:16.480490 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:16.480553 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:16.535576 1612198 cri.go:89] found id: ""
	I0630 15:52:16.535614 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.535624 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:16.535632 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:16.535707 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:16.581073 1612198 cri.go:89] found id: ""
	I0630 15:52:16.581101 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.581109 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:16.581116 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:16.581176 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:16.628001 1612198 cri.go:89] found id: ""
	I0630 15:52:16.628034 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.628045 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:16.628053 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:16.628134 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:16.683595 1612198 cri.go:89] found id: ""
	I0630 15:52:16.683626 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.683636 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:16.683645 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:16.683712 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:16.732593 1612198 cri.go:89] found id: ""
	I0630 15:52:16.732630 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.732641 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:16.732649 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:16.732725 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:16.789805 1612198 cri.go:89] found id: ""
	I0630 15:52:16.789840 1612198 logs.go:282] 0 containers: []
	W0630 15:52:16.789851 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:16.789864 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:16.789880 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:16.916519 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:16.916568 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:16.977865 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:16.977899 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:17.055490 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:17.055555 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:17.082576 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:17.082723 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:17.190881 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:19.691753 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:19.717971 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:19.718066 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:19.787360 1612198 cri.go:89] found id: ""
	I0630 15:52:19.787397 1612198 logs.go:282] 0 containers: []
	W0630 15:52:19.787410 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:19.787421 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:19.787492 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:19.829591 1612198 cri.go:89] found id: ""
	I0630 15:52:19.829620 1612198 logs.go:282] 0 containers: []
	W0630 15:52:19.829636 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:19.829644 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:19.829709 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:19.876706 1612198 cri.go:89] found id: ""
	I0630 15:52:19.876734 1612198 logs.go:282] 0 containers: []
	W0630 15:52:19.876745 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:19.876752 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:19.876824 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:19.916677 1612198 cri.go:89] found id: ""
	I0630 15:52:19.916702 1612198 logs.go:282] 0 containers: []
	W0630 15:52:19.916710 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:19.916716 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:19.916774 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:19.958515 1612198 cri.go:89] found id: ""
	I0630 15:52:19.958546 1612198 logs.go:282] 0 containers: []
	W0630 15:52:19.958558 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:19.958566 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:19.958640 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:19.997769 1612198 cri.go:89] found id: ""
	I0630 15:52:19.997802 1612198 logs.go:282] 0 containers: []
	W0630 15:52:19.997812 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:19.997821 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:19.997892 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:20.032613 1612198 cri.go:89] found id: ""
	I0630 15:52:20.032646 1612198 logs.go:282] 0 containers: []
	W0630 15:52:20.032659 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:20.032666 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:20.032735 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:20.079203 1612198 cri.go:89] found id: ""
	I0630 15:52:20.079236 1612198 logs.go:282] 0 containers: []
	W0630 15:52:20.079247 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:20.079260 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:20.079277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:20.196237 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:20.196272 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:20.244570 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:20.244600 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:20.311185 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:20.311232 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:20.327673 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:20.327714 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:20.412299 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:22.912510 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:22.932055 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:22.932228 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:22.979364 1612198 cri.go:89] found id: ""
	I0630 15:52:22.979398 1612198 logs.go:282] 0 containers: []
	W0630 15:52:22.979409 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:22.979417 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:22.979501 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:23.028402 1612198 cri.go:89] found id: ""
	I0630 15:52:23.028432 1612198 logs.go:282] 0 containers: []
	W0630 15:52:23.028441 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:23.028448 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:23.028506 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:23.070793 1612198 cri.go:89] found id: ""
	I0630 15:52:23.070827 1612198 logs.go:282] 0 containers: []
	W0630 15:52:23.070841 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:23.070849 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:23.070907 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:23.111809 1612198 cri.go:89] found id: ""
	I0630 15:52:23.111845 1612198 logs.go:282] 0 containers: []
	W0630 15:52:23.111858 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:23.111868 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:23.111964 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:23.163566 1612198 cri.go:89] found id: ""
	I0630 15:52:23.163598 1612198 logs.go:282] 0 containers: []
	W0630 15:52:23.163610 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:23.163620 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:23.163688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:23.208735 1612198 cri.go:89] found id: ""
	I0630 15:52:23.208770 1612198 logs.go:282] 0 containers: []
	W0630 15:52:23.208781 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:23.208792 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:23.208872 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:23.251794 1612198 cri.go:89] found id: ""
	I0630 15:52:23.251840 1612198 logs.go:282] 0 containers: []
	W0630 15:52:23.251855 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:23.251864 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:23.251939 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:23.295817 1612198 cri.go:89] found id: ""
	I0630 15:52:23.295877 1612198 logs.go:282] 0 containers: []
	W0630 15:52:23.295891 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:23.295906 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:23.295926 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:23.375370 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:23.375468 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:23.397050 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:23.397120 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:23.497991 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:23.498019 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:23.498036 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:23.583966 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:23.584020 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:26.141088 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:26.160965 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:26.161031 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:26.208201 1612198 cri.go:89] found id: ""
	I0630 15:52:26.208241 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.208251 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:26.208259 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:26.208349 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:26.256083 1612198 cri.go:89] found id: ""
	I0630 15:52:26.256113 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.256122 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:26.256129 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:26.256226 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:26.304202 1612198 cri.go:89] found id: ""
	I0630 15:52:26.304229 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.304237 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:26.304251 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:26.304318 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:26.345889 1612198 cri.go:89] found id: ""
	I0630 15:52:26.345925 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.345936 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:26.345945 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:26.346015 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:26.383384 1612198 cri.go:89] found id: ""
	I0630 15:52:26.383444 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.383459 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:26.383483 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:26.383569 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:26.422062 1612198 cri.go:89] found id: ""
	I0630 15:52:26.422096 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.422108 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:26.422117 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:26.422196 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:26.463080 1612198 cri.go:89] found id: ""
	I0630 15:52:26.463114 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.463123 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:26.463130 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:26.463189 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:26.510367 1612198 cri.go:89] found id: ""
	I0630 15:52:26.510400 1612198 logs.go:282] 0 containers: []
	W0630 15:52:26.510418 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:26.510431 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:26.510447 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:26.592650 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:26.592684 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:26.592703 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:26.675952 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:26.675995 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:26.716703 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:26.716748 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:26.768180 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:26.768222 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
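Each gather pass ends the same way: every source except "describe nodes" succeeds, because journalctl, dmesg, and crictl only need the node itself, while kubectl has to reach the apiserver at localhost:8443 named in /var/lib/minikube/kubeconfig. With no kube-apiserver container running, the TCP connect itself is refused, which is exactly the stderr repeated in each block. A hedged Go probe reproducing that distinction; the address comes from the log's stderr, everything else is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver endpoint kubectl is failing to reach, per the log.
	addr := "localhost:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// With no kube-apiserver container this is a connection refusal,
		// matching "The connection to the server localhost:8443 was refused".
		fmt.Printf("dial %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("dial %s: apiserver is listening\n", addr)
}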
	I0630 15:52:29.292059 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:29.311485 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:29.311560 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:29.351212 1612198 cri.go:89] found id: ""
	I0630 15:52:29.351243 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.351256 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:29.351265 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:29.351320 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:29.389810 1612198 cri.go:89] found id: ""
	I0630 15:52:29.389848 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.389862 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:29.389872 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:29.389949 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:29.438436 1612198 cri.go:89] found id: ""
	I0630 15:52:29.438468 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.438480 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:29.438487 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:29.438558 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:29.487543 1612198 cri.go:89] found id: ""
	I0630 15:52:29.487578 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.487593 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:29.487603 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:29.487683 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:29.527951 1612198 cri.go:89] found id: ""
	I0630 15:52:29.527998 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.528013 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:29.528025 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:29.528112 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:29.572879 1612198 cri.go:89] found id: ""
	I0630 15:52:29.572908 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.572920 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:29.572928 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:29.572998 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:29.618458 1612198 cri.go:89] found id: ""
	I0630 15:52:29.618496 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.618508 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:29.618516 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:29.618591 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:29.657969 1612198 cri.go:89] found id: ""
	I0630 15:52:29.657997 1612198 logs.go:282] 0 containers: []
	W0630 15:52:29.658005 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:29.658015 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:29.658029 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:29.713826 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:29.713874 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:29.729350 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:29.729392 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:29.816144 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:29.816170 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:29.816185 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:29.900218 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:29.900262 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:32.462236 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:32.484978 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:32.485075 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:32.548888 1612198 cri.go:89] found id: ""
	I0630 15:52:32.548925 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.548938 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:32.548947 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:32.549018 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:32.601597 1612198 cri.go:89] found id: ""
	I0630 15:52:32.601633 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.601645 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:32.601653 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:32.601729 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:32.652043 1612198 cri.go:89] found id: ""
	I0630 15:52:32.652089 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.652101 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:32.652109 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:32.652177 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:32.702970 1612198 cri.go:89] found id: ""
	I0630 15:52:32.703002 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.703014 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:32.703022 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:32.703097 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:32.757032 1612198 cri.go:89] found id: ""
	I0630 15:52:32.757068 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.757079 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:32.757089 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:32.757161 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:32.818488 1612198 cri.go:89] found id: ""
	I0630 15:52:32.818517 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.818525 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:32.818533 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:32.818587 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:32.884695 1612198 cri.go:89] found id: ""
	I0630 15:52:32.884732 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.884749 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:32.884758 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:32.884815 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:32.946048 1612198 cri.go:89] found id: ""
	I0630 15:52:32.946093 1612198 logs.go:282] 0 containers: []
	W0630 15:52:32.946105 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:32.946118 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:32.946134 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:33.025072 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:33.025130 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:33.044421 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:33.044452 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:33.144621 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:33.144645 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:33.144657 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:33.265229 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:33.265289 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:35.814606 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:35.831730 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:35.831801 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:35.876845 1612198 cri.go:89] found id: ""
	I0630 15:52:35.876890 1612198 logs.go:282] 0 containers: []
	W0630 15:52:35.876903 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:35.876913 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:35.877002 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:35.927108 1612198 cri.go:89] found id: ""
	I0630 15:52:35.927143 1612198 logs.go:282] 0 containers: []
	W0630 15:52:35.927154 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:35.927162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:35.927225 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:35.968003 1612198 cri.go:89] found id: ""
	I0630 15:52:35.968034 1612198 logs.go:282] 0 containers: []
	W0630 15:52:35.968045 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:35.968053 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:35.968118 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:36.010675 1612198 cri.go:89] found id: ""
	I0630 15:52:36.010708 1612198 logs.go:282] 0 containers: []
	W0630 15:52:36.010723 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:36.010731 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:36.010798 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:36.063986 1612198 cri.go:89] found id: ""
	I0630 15:52:36.064023 1612198 logs.go:282] 0 containers: []
	W0630 15:52:36.064036 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:36.064045 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:36.064115 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:36.107308 1612198 cri.go:89] found id: ""
	I0630 15:52:36.107348 1612198 logs.go:282] 0 containers: []
	W0630 15:52:36.107362 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:36.107372 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:36.107458 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:36.148628 1612198 cri.go:89] found id: ""
	I0630 15:52:36.148677 1612198 logs.go:282] 0 containers: []
	W0630 15:52:36.148688 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:36.148697 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:36.148772 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:36.196306 1612198 cri.go:89] found id: ""
	I0630 15:52:36.196343 1612198 logs.go:282] 0 containers: []
	W0630 15:52:36.196354 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:36.196366 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:36.196384 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:36.254691 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:36.254736 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:36.276908 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:36.276950 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:36.367965 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:36.367990 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:36.368004 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:36.475338 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:36.475409 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:39.031636 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:39.048514 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:39.048585 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:39.091286 1612198 cri.go:89] found id: ""
	I0630 15:52:39.091317 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.091327 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:39.091334 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:39.091405 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:39.134767 1612198 cri.go:89] found id: ""
	I0630 15:52:39.134806 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.134815 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:39.134821 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:39.134878 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:39.180851 1612198 cri.go:89] found id: ""
	I0630 15:52:39.180881 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.180890 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:39.180899 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:39.180970 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:39.220316 1612198 cri.go:89] found id: ""
	I0630 15:52:39.220372 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.220387 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:39.220397 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:39.220492 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:39.267431 1612198 cri.go:89] found id: ""
	I0630 15:52:39.267474 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.267486 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:39.267494 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:39.267563 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:39.314025 1612198 cri.go:89] found id: ""
	I0630 15:52:39.314061 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.314071 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:39.314080 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:39.314147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:39.352775 1612198 cri.go:89] found id: ""
	I0630 15:52:39.352817 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.352829 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:39.352837 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:39.352894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:39.390784 1612198 cri.go:89] found id: ""
	I0630 15:52:39.390811 1612198 logs.go:282] 0 containers: []
	W0630 15:52:39.390819 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:39.390829 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:39.390842 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:39.444689 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:39.444735 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:39.465820 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:39.465866 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:39.564325 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:39.564348 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:39.564362 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:39.657350 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:39.657394 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
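The passes recur on a steady cadence: each pgrep lands roughly three seconds after the previous gather finishes (15:52:22.9, 15:52:26.1, 15:52:29.2, ...), the signature of a fixed-interval retry loop rather than exponential backoff. A minimal sketch of that loop shape; the pgrep probe is copied from the log, while the interval is inferred from the timestamps and the overall timeout is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp mirrors the pgrep probe at the top of each pass in the log.
// pgrep exits 0 only when a matching process exists.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	const interval = 2500 * time.Millisecond // inferred from the ~3s log cadence
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout, not from the log
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the real log, a full container probe plus log gather runs here.
		time.Sleep(interval)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}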
	I0630 15:52:42.202419 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:42.221856 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:42.221942 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:42.273521 1612198 cri.go:89] found id: ""
	I0630 15:52:42.273556 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.273568 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:42.273577 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:42.273648 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:42.314979 1612198 cri.go:89] found id: ""
	I0630 15:52:42.315010 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.315018 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:42.315026 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:42.315094 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:42.357211 1612198 cri.go:89] found id: ""
	I0630 15:52:42.357257 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.357269 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:42.357280 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:42.357360 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:42.394991 1612198 cri.go:89] found id: ""
	I0630 15:52:42.395022 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.395033 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:42.395041 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:42.395112 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:42.434223 1612198 cri.go:89] found id: ""
	I0630 15:52:42.434254 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.434265 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:42.434274 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:42.434344 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:42.474381 1612198 cri.go:89] found id: ""
	I0630 15:52:42.474419 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.474434 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:42.474444 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:42.474518 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:42.513848 1612198 cri.go:89] found id: ""
	I0630 15:52:42.513879 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.513890 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:42.513898 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:42.513958 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:42.552723 1612198 cri.go:89] found id: ""
	I0630 15:52:42.552752 1612198 logs.go:282] 0 containers: []
	W0630 15:52:42.552759 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:42.552769 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:42.552783 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:42.623364 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:42.623425 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:42.638223 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:42.638263 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:42.720632 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:42.720657 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:42.720672 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:42.805318 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:42.805369 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:45.356097 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:45.375177 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:45.375249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:45.411531 1612198 cri.go:89] found id: ""
	I0630 15:52:45.411573 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.411585 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:45.411594 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:45.411670 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:45.446010 1612198 cri.go:89] found id: ""
	I0630 15:52:45.446040 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.446049 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:45.446055 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:45.446126 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:45.483165 1612198 cri.go:89] found id: ""
	I0630 15:52:45.483213 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.483225 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:45.483234 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:45.483309 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:45.519693 1612198 cri.go:89] found id: ""
	I0630 15:52:45.519724 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.519732 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:45.519739 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:45.519813 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:45.554863 1612198 cri.go:89] found id: ""
	I0630 15:52:45.554902 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.554913 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:45.554921 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:45.555000 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:45.590429 1612198 cri.go:89] found id: ""
	I0630 15:52:45.590460 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.590469 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:45.590476 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:45.590545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:45.625876 1612198 cri.go:89] found id: ""
	I0630 15:52:45.625914 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.625927 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:45.625935 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:45.626002 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:45.663157 1612198 cri.go:89] found id: ""
	I0630 15:52:45.663188 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.663197 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:45.663210 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:45.663227 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:45.717765 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:45.717817 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:45.731782 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:45.731815 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:45.798057 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:45.798090 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:45.798106 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:45.878867 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:45.878917 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:48.422047 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:48.441634 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:48.441712 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:48.482676 1612198 cri.go:89] found id: ""
	I0630 15:52:48.482706 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.482714 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:48.482721 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:48.482781 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:48.523604 1612198 cri.go:89] found id: ""
	I0630 15:52:48.523645 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.523659 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:48.523669 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:48.523740 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:48.566545 1612198 cri.go:89] found id: ""
	I0630 15:52:48.566576 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.566588 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:48.566595 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:48.566667 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:48.602166 1612198 cri.go:89] found id: ""
	I0630 15:52:48.602204 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.602219 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:48.602228 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:48.602296 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:48.645664 1612198 cri.go:89] found id: ""
	I0630 15:52:48.645701 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.645712 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:48.645724 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:48.645796 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:48.689364 1612198 cri.go:89] found id: ""
	I0630 15:52:48.689437 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.689449 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:48.689457 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:48.689532 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:48.727484 1612198 cri.go:89] found id: ""
	I0630 15:52:48.727594 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.727614 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:48.727623 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:48.727695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:48.765617 1612198 cri.go:89] found id: ""
	I0630 15:52:48.765649 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.765662 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:48.765676 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:48.765696 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:48.832480 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:48.832525 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:48.851001 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:48.851033 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:48.935090 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:48.935117 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:48.935139 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:49.020511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:49.020556 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.569582 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:51.586531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:51.586608 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:51.623986 1612198 cri.go:89] found id: ""
	I0630 15:52:51.624022 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.624034 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:51.624041 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:51.624097 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:51.660234 1612198 cri.go:89] found id: ""
	I0630 15:52:51.660289 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.660311 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:51.660321 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:51.660396 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:51.694392 1612198 cri.go:89] found id: ""
	I0630 15:52:51.694421 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.694431 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:51.694439 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:51.694509 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:51.733636 1612198 cri.go:89] found id: ""
	I0630 15:52:51.733679 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.733692 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:51.733700 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:51.733767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:51.770073 1612198 cri.go:89] found id: ""
	I0630 15:52:51.770105 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.770116 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:51.770125 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:51.770193 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:51.806054 1612198 cri.go:89] found id: ""
	I0630 15:52:51.806082 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.806096 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:51.806105 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:51.806166 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:51.844220 1612198 cri.go:89] found id: ""
	I0630 15:52:51.844253 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.844263 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:51.844270 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:51.844337 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:51.879139 1612198 cri.go:89] found id: ""
	I0630 15:52:51.879180 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.879192 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:51.879206 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:51.879225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:51.959131 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:51.959178 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.999852 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:51.999898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:52.054538 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:52.054586 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:52.068544 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:52.068582 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:52.141184 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
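One detail worth noting for readers diffing passes: the order of the "Gathering logs for ..." steps shuffles between passes (CRI-O first in one, kubelet first in the next) even though the set of sources is fixed. That is the classic signature of ranging over a Go map, whose iteration order is deliberately randomized on each run; a self-contained illustration, where the source names come from the log but the map layout is only an assumption about the implementation:

package main

import "fmt"

func main() {
	// The log sources gathered in each pass above.
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg ... | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"CRI-O":            "journalctl -u crio -n 400",
		"container status": "crictl ps -a",
	}
	// Go randomizes map iteration order per run, so two passes over the
	// same map can emit these lines in a different sequence.
	for name := range sources {
		fmt.Println("Gathering logs for", name, "...")
	}
}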
	I0630 15:52:54.642061 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:54.657561 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:54.657631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:54.699127 1612198 cri.go:89] found id: ""
	I0630 15:52:54.699156 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.699165 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:54.699172 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:54.699249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:54.743537 1612198 cri.go:89] found id: ""
	I0630 15:52:54.743582 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.743595 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:54.743604 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:54.743691 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:54.793655 1612198 cri.go:89] found id: ""
	I0630 15:52:54.793692 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.793705 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:54.793714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:54.793789 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:54.836404 1612198 cri.go:89] found id: ""
	I0630 15:52:54.836439 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.836450 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:54.836458 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:54.836530 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:54.881834 1612198 cri.go:89] found id: ""
	I0630 15:52:54.881866 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.881874 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:54.881881 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:54.881945 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:54.920907 1612198 cri.go:89] found id: ""
	I0630 15:52:54.920937 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.920945 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:54.920952 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:54.921019 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:54.964724 1612198 cri.go:89] found id: ""
	I0630 15:52:54.964777 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.964790 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:54.964799 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:54.964877 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:55.000611 1612198 cri.go:89] found id: ""
	I0630 15:52:55.000646 1612198 logs.go:282] 0 containers: []
	W0630 15:52:55.000654 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:55.000665 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:55.000678 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:55.075252 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:55.075285 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:55.075306 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:55.162081 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:55.162133 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:55.226240 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:55.226277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:55.297365 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:55.297429 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:57.821154 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:57.853607 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:57.853696 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:57.914164 1612198 cri.go:89] found id: ""
	I0630 15:52:57.914210 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.914227 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:57.914246 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:57.914347 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:57.987318 1612198 cri.go:89] found id: ""
	I0630 15:52:57.987351 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.987366 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:57.987377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:57.987457 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:58.079419 1612198 cri.go:89] found id: ""
	I0630 15:52:58.079447 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.079455 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:58.079462 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:58.079527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:58.159322 1612198 cri.go:89] found id: ""
	I0630 15:52:58.159364 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.159376 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:58.159385 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:58.159456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:58.214549 1612198 cri.go:89] found id: ""
	I0630 15:52:58.214589 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.214605 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:58.214614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:58.214688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:58.268709 1612198 cri.go:89] found id: ""
	I0630 15:52:58.268743 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.268755 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:58.268764 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:58.268865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:58.336282 1612198 cri.go:89] found id: ""
	I0630 15:52:58.336316 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.336327 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:58.336335 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:58.336411 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:58.385539 1612198 cri.go:89] found id: ""
	I0630 15:52:58.385568 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.385577 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:58.385587 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:58.385600 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:58.490925 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:58.490953 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:58.490966 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:58.595534 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:58.595636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:58.670912 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:58.670947 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:58.746686 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:58.746777 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.264137 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:01.286226 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:01.286330 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:01.365280 1612198 cri.go:89] found id: ""
	I0630 15:53:01.365314 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.365328 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:01.365336 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:01.365446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:01.416551 1612198 cri.go:89] found id: ""
	I0630 15:53:01.416609 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.416628 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:01.416639 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:01.416760 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:01.466901 1612198 cri.go:89] found id: ""
	I0630 15:53:01.466951 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.466968 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:01.466992 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:01.467076 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:01.515958 1612198 cri.go:89] found id: ""
	I0630 15:53:01.516004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.516018 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:01.516026 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:01.516100 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:01.556162 1612198 cri.go:89] found id: ""
	I0630 15:53:01.556199 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.556212 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:01.556220 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:01.556294 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:01.596633 1612198 cri.go:89] found id: ""
	I0630 15:53:01.596668 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.596681 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:01.596701 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:01.596767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:01.643515 1612198 cri.go:89] found id: ""
	I0630 15:53:01.643544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.643553 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:01.643560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:01.643623 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:01.688673 1612198 cri.go:89] found id: ""
	I0630 15:53:01.688716 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.688730 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:01.688746 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:01.688763 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:01.732854 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:01.732887 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:01.792838 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:01.792898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.809743 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:01.809803 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:01.893975 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:01.894006 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:01.894020 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:04.474834 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:04.495812 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:04.495894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:04.545620 1612198 cri.go:89] found id: ""
	I0630 15:53:04.545652 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.545664 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:04.545674 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:04.545819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:04.595168 1612198 cri.go:89] found id: ""
	I0630 15:53:04.595303 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.595325 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:04.595339 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:04.595423 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:04.648158 1612198 cri.go:89] found id: ""
	I0630 15:53:04.648189 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.648201 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:04.648210 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:04.648279 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:04.695407 1612198 cri.go:89] found id: ""
	I0630 15:53:04.695441 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.695452 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:04.695460 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:04.695525 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:04.745024 1612198 cri.go:89] found id: ""
	I0630 15:53:04.745059 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.745072 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:04.745079 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:04.745147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:04.784238 1612198 cri.go:89] found id: ""
	I0630 15:53:04.784278 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.784291 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:04.784301 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:04.784375 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:04.828921 1612198 cri.go:89] found id: ""
	I0630 15:53:04.828962 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.828976 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:04.828986 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:04.829058 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:04.878950 1612198 cri.go:89] found id: ""
	I0630 15:53:04.878980 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.878992 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:04.879004 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:04.879021 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:04.898852 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:04.898883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:04.994919 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:04.994955 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:04.994971 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:05.081838 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:05.081891 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:05.134599 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:05.134639 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:07.707840 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:07.724492 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:07.724584 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:07.764489 1612198 cri.go:89] found id: ""
	I0630 15:53:07.764533 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.764545 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:07.764553 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:07.764641 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:07.813734 1612198 cri.go:89] found id: ""
	I0630 15:53:07.813762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.813771 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:07.813777 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:07.813838 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:07.866385 1612198 cri.go:89] found id: ""
	I0630 15:53:07.866412 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.866420 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:07.866426 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:07.866480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:07.913274 1612198 cri.go:89] found id: ""
	I0630 15:53:07.913307 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.913317 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:07.913325 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:07.913394 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:07.966418 1612198 cri.go:89] found id: ""
	I0630 15:53:07.966461 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.966475 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:07.966484 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:07.966554 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:08.017379 1612198 cri.go:89] found id: ""
	I0630 15:53:08.017443 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.017457 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:08.017465 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:08.017559 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:08.070396 1612198 cri.go:89] found id: ""
	I0630 15:53:08.070427 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.070440 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:08.070449 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:08.070519 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:08.118074 1612198 cri.go:89] found id: ""
	I0630 15:53:08.118118 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.118132 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:08.118146 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:08.118164 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:08.139695 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:08.139728 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:08.252659 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:08.252683 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:08.252698 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:08.381553 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:08.381602 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:08.448865 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:08.448912 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.032838 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:11.059173 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:11.059251 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:11.115790 1612198 cri.go:89] found id: ""
	I0630 15:53:11.115826 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.115839 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:11.115848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:11.115920 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:11.175246 1612198 cri.go:89] found id: ""
	I0630 15:53:11.175295 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.175307 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:11.175316 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:11.175389 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:11.230317 1612198 cri.go:89] found id: ""
	I0630 15:53:11.230349 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.230360 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:11.230368 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:11.230437 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:11.283786 1612198 cri.go:89] found id: ""
	I0630 15:53:11.283827 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.283839 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:11.283848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:11.283927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:11.334412 1612198 cri.go:89] found id: ""
	I0630 15:53:11.334437 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.334445 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:11.334451 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:11.334508 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:11.399160 1612198 cri.go:89] found id: ""
	I0630 15:53:11.399195 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.399208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:11.399218 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:11.399307 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:11.461034 1612198 cri.go:89] found id: ""
	I0630 15:53:11.461065 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.461078 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:11.461087 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:11.461144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:11.509139 1612198 cri.go:89] found id: ""
	I0630 15:53:11.509169 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.509180 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:11.509194 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:11.509217 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:11.560268 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:11.560316 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.616198 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:11.616253 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:11.636775 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:11.636820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:11.735910 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:11.735936 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:11.735954 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:14.327948 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:14.347007 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:14.347078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:14.391736 1612198 cri.go:89] found id: ""
	I0630 15:53:14.391770 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.391782 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:14.391790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:14.391855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:14.438236 1612198 cri.go:89] found id: ""
	I0630 15:53:14.438274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.438286 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:14.438294 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:14.438381 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:14.479508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.479539 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.479550 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:14.479558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:14.479618 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:14.530347 1612198 cri.go:89] found id: ""
	I0630 15:53:14.530386 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.530400 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:14.530409 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:14.530480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:14.576356 1612198 cri.go:89] found id: ""
	I0630 15:53:14.576392 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.576404 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:14.576413 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:14.576491 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:14.627508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.627546 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.627557 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:14.627565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:14.627636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:14.674780 1612198 cri.go:89] found id: ""
	I0630 15:53:14.674808 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.674824 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:14.674832 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:14.674899 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:14.717562 1612198 cri.go:89] found id: ""
	I0630 15:53:14.717599 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.717611 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:14.717624 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:14.717655 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:14.801031 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:14.801063 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:14.801083 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:14.890511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:14.890559 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:14.953255 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:14.953300 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:15.023105 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:15.023160 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:17.543438 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:17.564446 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:17.564545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:17.602287 1612198 cri.go:89] found id: ""
	I0630 15:53:17.602336 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.602349 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:17.602358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:17.602449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:17.643215 1612198 cri.go:89] found id: ""
	I0630 15:53:17.643246 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.643259 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:17.643266 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:17.643328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:17.684398 1612198 cri.go:89] found id: ""
	I0630 15:53:17.684474 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.684484 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:17.684493 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:17.684567 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:17.734640 1612198 cri.go:89] found id: ""
	I0630 15:53:17.734681 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.734694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:17.734702 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:17.734787 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:17.771368 1612198 cri.go:89] found id: ""
	I0630 15:53:17.771404 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.771416 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:17.771425 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:17.771497 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:17.828694 1612198 cri.go:89] found id: ""
	I0630 15:53:17.828724 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.828732 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:17.828741 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:17.828815 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:17.870487 1612198 cri.go:89] found id: ""
	I0630 15:53:17.870535 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.870549 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:17.870558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:17.870639 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:17.907397 1612198 cri.go:89] found id: ""
	I0630 15:53:17.907430 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.907440 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:17.907451 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:17.907464 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:17.983887 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:17.983934 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:18.027406 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:18.027439 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:18.079092 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:18.079140 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:18.094309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:18.094345 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:18.168726 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:20.669207 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:20.688479 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:20.688575 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:20.729290 1612198 cri.go:89] found id: ""
	I0630 15:53:20.729317 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.729327 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:20.729334 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:20.729399 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:20.772585 1612198 cri.go:89] found id: ""
	I0630 15:53:20.772606 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.772638 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:20.772647 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:20.772704 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:20.815369 1612198 cri.go:89] found id: ""
	I0630 15:53:20.815407 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.815419 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:20.815428 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:20.815490 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:20.856251 1612198 cri.go:89] found id: ""
	I0630 15:53:20.856282 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.856294 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:20.856304 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:20.856371 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:20.895690 1612198 cri.go:89] found id: ""
	I0630 15:53:20.895723 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.895732 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:20.895743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:20.895823 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:20.938040 1612198 cri.go:89] found id: ""
	I0630 15:53:20.938075 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.938085 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:20.938094 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:20.938163 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:20.983241 1612198 cri.go:89] found id: ""
	I0630 15:53:20.983280 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.983293 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:20.983302 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:20.983373 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:21.029599 1612198 cri.go:89] found id: ""
	I0630 15:53:21.029633 1612198 logs.go:282] 0 containers: []
	W0630 15:53:21.029645 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:21.029659 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:21.029675 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:21.115729 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:21.115753 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:21.115766 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:21.192780 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:21.192824 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:21.238081 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:21.238141 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:21.298363 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:21.298437 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:23.816993 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:23.835380 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:23.835460 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:23.877562 1612198 cri.go:89] found id: ""
	I0630 15:53:23.877598 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.877610 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:23.877618 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:23.877695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:23.919089 1612198 cri.go:89] found id: ""
	I0630 15:53:23.919130 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.919144 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:23.919152 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:23.919232 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:23.964835 1612198 cri.go:89] found id: ""
	I0630 15:53:23.964864 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.964875 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:23.964883 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:23.964956 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:24.011639 1612198 cri.go:89] found id: ""
	I0630 15:53:24.011680 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.011694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:24.011704 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:24.011791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:24.059206 1612198 cri.go:89] found id: ""
	I0630 15:53:24.059240 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.059250 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:24.059262 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:24.059335 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:24.116479 1612198 cri.go:89] found id: ""
	I0630 15:53:24.116517 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.116530 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:24.116540 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:24.116619 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:24.164108 1612198 cri.go:89] found id: ""
	I0630 15:53:24.164142 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.164153 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:24.164162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:24.164235 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:24.232264 1612198 cri.go:89] found id: ""
	I0630 15:53:24.232299 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.232312 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:24.232325 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:24.232343 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:24.334546 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:24.334577 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:24.334597 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:24.450906 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:24.450963 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:24.523317 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:24.523361 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:24.609506 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:24.609547 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:27.134042 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:27.156543 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:27.156635 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:27.206777 1612198 cri.go:89] found id: ""
	I0630 15:53:27.206819 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.206831 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:27.206841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:27.206924 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:27.257098 1612198 cri.go:89] found id: ""
	I0630 15:53:27.257141 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.257153 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:27.257162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:27.257226 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:27.311101 1612198 cri.go:89] found id: ""
	I0630 15:53:27.311129 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.311137 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:27.311164 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:27.311233 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:27.356225 1612198 cri.go:89] found id: ""
	I0630 15:53:27.356264 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.356276 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:27.356285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:27.356446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:27.408114 1612198 cri.go:89] found id: ""
	I0630 15:53:27.408173 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.408185 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:27.408194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:27.408264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:27.453433 1612198 cri.go:89] found id: ""
	I0630 15:53:27.453471 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.453483 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:27.453491 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:27.453560 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:27.502170 1612198 cri.go:89] found id: ""
	I0630 15:53:27.502209 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.502222 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:27.502230 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:27.502304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:27.539066 1612198 cri.go:89] found id: ""
	I0630 15:53:27.539104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.539113 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:27.539124 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:27.539157 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:27.557767 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:27.557807 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:27.661895 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:27.661924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:27.661943 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:27.767088 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:27.767156 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:27.814647 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:27.814683 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.372878 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:30.392885 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:30.392993 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:30.450197 1612198 cri.go:89] found id: ""
	I0630 15:53:30.450235 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.450248 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:30.450258 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:30.450342 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:30.507009 1612198 cri.go:89] found id: ""
	I0630 15:53:30.507041 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.507051 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:30.507060 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:30.507147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:30.554455 1612198 cri.go:89] found id: ""
	I0630 15:53:30.554485 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.554496 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:30.554505 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:30.554572 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:30.598785 1612198 cri.go:89] found id: ""
	I0630 15:53:30.598821 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.598833 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:30.598841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:30.598911 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:30.634661 1612198 cri.go:89] found id: ""
	I0630 15:53:30.634701 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.634713 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:30.634722 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:30.634794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:30.674870 1612198 cri.go:89] found id: ""
	I0630 15:53:30.674903 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.674913 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:30.674922 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:30.674984 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:30.715843 1612198 cri.go:89] found id: ""
	I0630 15:53:30.715873 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.715882 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:30.715889 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:30.715947 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:30.752318 1612198 cri.go:89] found id: ""
	I0630 15:53:30.752356 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.752375 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:30.752390 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:30.752406 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.824741 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:30.824784 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:30.838605 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:30.838640 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:30.915839 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:30.915924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:30.915959 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:30.999770 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:30.999820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:33.553483 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:33.570047 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:33.570150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:33.616739 1612198 cri.go:89] found id: ""
	I0630 15:53:33.616775 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.616788 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:33.616798 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:33.616865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:33.659234 1612198 cri.go:89] found id: ""
	I0630 15:53:33.659265 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.659277 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:33.659285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:33.659353 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:33.697938 1612198 cri.go:89] found id: ""
	I0630 15:53:33.697977 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.697989 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:33.697997 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:33.698115 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:33.739043 1612198 cri.go:89] found id: ""
	I0630 15:53:33.739104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.739118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:33.739127 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:33.739200 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:33.781947 1612198 cri.go:89] found id: ""
	I0630 15:53:33.781983 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.781994 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:33.782006 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:33.782078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:33.818201 1612198 cri.go:89] found id: ""
	I0630 15:53:33.818241 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.818254 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:33.818264 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:33.818336 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:33.865630 1612198 cri.go:89] found id: ""
	I0630 15:53:33.865767 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.865806 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:33.865851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:33.865966 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:33.905740 1612198 cri.go:89] found id: ""
	I0630 15:53:33.905807 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.905821 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:33.905834 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:33.905852 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:33.978403 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:33.978451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:34.000180 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:34.000225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:34.077381 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:34.077433 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:34.077451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:34.158516 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:34.158571 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:36.703046 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:36.725942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:36.726033 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:36.769910 1612198 cri.go:89] found id: ""
	I0630 15:53:36.770040 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.770066 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:36.770075 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:36.770150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:36.817303 1612198 cri.go:89] found id: ""
	I0630 15:53:36.817339 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.817350 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:36.817358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:36.817442 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:36.852676 1612198 cri.go:89] found id: ""
	I0630 15:53:36.852721 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.852734 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:36.852743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:36.852811 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:36.896796 1612198 cri.go:89] found id: ""
	I0630 15:53:36.896829 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.896840 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:36.896848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:36.896929 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:36.932669 1612198 cri.go:89] found id: ""
	I0630 15:53:36.932708 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.932720 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:36.932729 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:36.932810 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:36.972728 1612198 cri.go:89] found id: ""
	I0630 15:53:36.972762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.972773 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:36.972781 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:36.972855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:37.009554 1612198 cri.go:89] found id: ""
	I0630 15:53:37.009594 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.009605 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:37.009614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:37.009688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:37.047124 1612198 cri.go:89] found id: ""
	I0630 15:53:37.047163 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.047175 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:37.047188 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:37.047204 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:37.110372 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:37.110427 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:37.127309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:37.127352 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:37.196740 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:37.196770 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:37.196793 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:37.284276 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:37.284322 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:39.832609 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:39.849706 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:39.849794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:39.893352 1612198 cri.go:89] found id: ""
	I0630 15:53:39.893391 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.893433 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:39.893442 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:39.893515 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:39.932840 1612198 cri.go:89] found id: ""
	I0630 15:53:39.932868 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.932876 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:39.932890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:39.932955 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:39.981060 1612198 cri.go:89] found id: ""
	I0630 15:53:39.981097 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.981109 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:39.981117 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:39.981203 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:40.018727 1612198 cri.go:89] found id: ""
	I0630 15:53:40.018768 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.018781 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:40.018790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:40.018863 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:40.061585 1612198 cri.go:89] found id: ""
	I0630 15:53:40.061627 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.061640 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:40.061649 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:40.061743 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:40.105417 1612198 cri.go:89] found id: ""
	I0630 15:53:40.105448 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.105456 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:40.105464 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:40.105527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:40.141656 1612198 cri.go:89] found id: ""
	I0630 15:53:40.141686 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.141697 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:40.141705 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:40.141775 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:40.179978 1612198 cri.go:89] found id: ""
	I0630 15:53:40.180011 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.180020 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:40.180029 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:40.180042 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:40.197879 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:40.197924 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:40.271201 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:40.271257 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:40.271277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:40.355166 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:40.355211 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:40.408985 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:40.409023 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:42.967786 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:42.987531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:42.987625 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:43.023328 1612198 cri.go:89] found id: ""
	I0630 15:53:43.023360 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.023370 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:43.023377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:43.023449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:43.059730 1612198 cri.go:89] found id: ""
	I0630 15:53:43.059774 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.059785 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:43.059793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:43.059875 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:43.100987 1612198 cri.go:89] found id: ""
	I0630 15:53:43.101024 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.101036 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:43.101045 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:43.101118 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:43.139556 1612198 cri.go:89] found id: ""
	I0630 15:53:43.139591 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.139603 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:43.139611 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:43.139669 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:43.177647 1612198 cri.go:89] found id: ""
	I0630 15:53:43.177677 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.177686 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:43.177692 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:43.177749 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:43.214354 1612198 cri.go:89] found id: ""
	I0630 15:53:43.214388 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.214400 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:43.214407 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:43.214475 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:43.254332 1612198 cri.go:89] found id: ""
	I0630 15:53:43.254364 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.254376 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:43.254393 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:43.254459 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:43.292194 1612198 cri.go:89] found id: ""
	I0630 15:53:43.292224 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.292232 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:43.292243 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:43.292255 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:43.345690 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:43.345732 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:43.360155 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:43.360191 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:43.441505 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:43.441537 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:43.441554 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:43.527009 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:43.527063 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:46.069596 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:46.092563 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:46.092646 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:46.132093 1612198 cri.go:89] found id: ""
	I0630 15:53:46.132131 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.132144 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:46.132153 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:46.132225 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:46.175509 1612198 cri.go:89] found id: ""
	I0630 15:53:46.175544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.175556 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:46.175565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:46.175647 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:46.225442 1612198 cri.go:89] found id: ""
	I0630 15:53:46.225478 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.225490 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:46.225502 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:46.225573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:46.275070 1612198 cri.go:89] found id: ""
	I0630 15:53:46.275109 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.275122 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:46.275131 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:46.275206 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:46.320084 1612198 cri.go:89] found id: ""
	I0630 15:53:46.320116 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.320126 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:46.320133 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:46.320198 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:46.360602 1612198 cri.go:89] found id: ""
	I0630 15:53:46.360682 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.360699 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:46.360711 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:46.360818 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:46.404187 1612198 cri.go:89] found id: ""
	I0630 15:53:46.404222 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.404231 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:46.404238 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:46.404304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:46.457761 1612198 cri.go:89] found id: ""
	I0630 15:53:46.457803 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.457820 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:46.457835 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:46.457855 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:46.524526 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:46.524574 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:46.542938 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:46.542974 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:46.620336 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:46.620372 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:46.620386 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:46.706447 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:46.706496 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:49.256833 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:49.276256 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:49.276328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:49.326292 1612198 cri.go:89] found id: ""
	I0630 15:53:49.326327 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.326339 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:49.326356 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:49.326427 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:49.371428 1612198 cri.go:89] found id: ""
	I0630 15:53:49.371486 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.371496 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:49.371503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:49.371568 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:49.415763 1612198 cri.go:89] found id: ""
	I0630 15:53:49.415840 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.415855 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:49.415864 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:49.415927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:49.456276 1612198 cri.go:89] found id: ""
	I0630 15:53:49.456313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.456324 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:49.456332 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:49.456421 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:49.496696 1612198 cri.go:89] found id: ""
	I0630 15:53:49.496735 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.496753 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:49.496762 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:49.496819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:49.537728 1612198 cri.go:89] found id: ""
	I0630 15:53:49.537763 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.537771 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:49.537778 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:49.537837 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:49.575693 1612198 cri.go:89] found id: ""
	I0630 15:53:49.575725 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.575734 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:49.575740 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:49.575795 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:49.617896 1612198 cri.go:89] found id: ""
	I0630 15:53:49.617931 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.617941 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:49.617967 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:49.617986 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:49.668327 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:49.668372 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:49.721223 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:49.721270 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:49.737061 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:49.737094 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:49.814464 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:49.814490 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:49.814503 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:52.393329 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:52.409925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:52.410010 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:52.446622 1612198 cri.go:89] found id: ""
	I0630 15:53:52.446659 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.446673 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:52.446684 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:52.446769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:52.493894 1612198 cri.go:89] found id: ""
	I0630 15:53:52.493929 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.493940 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:52.493947 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:52.494012 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:52.530891 1612198 cri.go:89] found id: ""
	I0630 15:53:52.530943 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.530956 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:52.530965 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:52.531141 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:52.569016 1612198 cri.go:89] found id: ""
	I0630 15:53:52.569046 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.569054 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:52.569068 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:52.569144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:52.607137 1612198 cri.go:89] found id: ""
	I0630 15:53:52.607176 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.607186 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:52.607194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:52.607264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:52.655286 1612198 cri.go:89] found id: ""
	I0630 15:53:52.655334 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.655343 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:52.655350 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:52.655420 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:52.693017 1612198 cri.go:89] found id: ""
	I0630 15:53:52.693053 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.693066 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:52.693093 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:52.693156 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:52.729639 1612198 cri.go:89] found id: ""
	I0630 15:53:52.729674 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.729685 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:52.729713 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:52.729731 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:52.744808 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:52.744846 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:52.818006 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:52.818076 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:52.818095 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:52.913720 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:52.913794 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:52.955851 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:52.955898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:55.506514 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:55.523943 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:55.524024 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:55.562846 1612198 cri.go:89] found id: ""
	I0630 15:53:55.562884 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.562893 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:55.562900 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:55.562960 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:55.601862 1612198 cri.go:89] found id: ""
	I0630 15:53:55.601895 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.601907 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:55.601915 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:55.601988 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:55.650904 1612198 cri.go:89] found id: ""
	I0630 15:53:55.650946 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.650958 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:55.650968 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:55.651051 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:55.695050 1612198 cri.go:89] found id: ""
	I0630 15:53:55.695081 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.695089 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:55.695096 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:55.695167 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:55.732863 1612198 cri.go:89] found id: ""
	I0630 15:53:55.732904 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.732917 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:55.732925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:55.732997 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:55.772221 1612198 cri.go:89] found id: ""
	I0630 15:53:55.772254 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.772265 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:55.772275 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:55.772349 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:55.811091 1612198 cri.go:89] found id: ""
	I0630 15:53:55.811134 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.811146 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:55.811154 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:55.811213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:55.846273 1612198 cri.go:89] found id: ""
	I0630 15:53:55.846313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.846338 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:55.846352 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:55.846370 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:55.921797 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:55.921845 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:55.963517 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:55.963553 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:56.023942 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:56.023988 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:56.038647 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:56.038687 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:56.119572 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:58.620232 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:58.638119 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:58.638194 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:58.674101 1612198 cri.go:89] found id: ""
	I0630 15:53:58.674160 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.674175 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:58.674184 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:58.674259 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:58.712115 1612198 cri.go:89] found id: ""
	I0630 15:53:58.712167 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.712179 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:58.712192 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:58.712261 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:58.766961 1612198 cri.go:89] found id: ""
	I0630 15:53:58.767004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.767016 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:58.767025 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:58.767114 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:58.817233 1612198 cri.go:89] found id: ""
	I0630 15:53:58.817274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.817286 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:58.817297 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:58.817379 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:58.858728 1612198 cri.go:89] found id: ""
	I0630 15:53:58.858757 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.858774 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:58.858784 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:58.858842 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:58.900041 1612198 cri.go:89] found id: ""
	I0630 15:53:58.900082 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.900094 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:58.900102 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:58.900176 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:58.944995 1612198 cri.go:89] found id: ""
	I0630 15:53:58.945026 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.945037 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:58.945046 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:58.945110 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:58.987156 1612198 cri.go:89] found id: ""
	I0630 15:53:58.987204 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.987216 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:58.987233 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:58.987252 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:59.054774 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:59.054821 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:59.071556 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:59.071601 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:59.144600 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:59.144631 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:59.144644 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:59.218471 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:59.218519 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:01.761632 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:01.781793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:01.781885 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:01.834337 1612198 cri.go:89] found id: ""
	I0630 15:54:01.834370 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.834381 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:01.834390 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:01.834456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:01.879488 1612198 cri.go:89] found id: ""
	I0630 15:54:01.879528 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.879542 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:01.879552 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:01.879629 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:01.919612 1612198 cri.go:89] found id: ""
	I0630 15:54:01.919656 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.919671 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:01.919681 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:01.919755 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:01.959025 1612198 cri.go:89] found id: ""
	I0630 15:54:01.959108 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.959118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:01.959126 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:01.959213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:02.004157 1612198 cri.go:89] found id: ""
	I0630 15:54:02.004193 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.004207 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:02.004216 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:02.004293 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:02.041453 1612198 cri.go:89] found id: ""
	I0630 15:54:02.041488 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.041496 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:02.041503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:02.041573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:02.092760 1612198 cri.go:89] found id: ""
	I0630 15:54:02.092801 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.092814 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:02.092824 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:02.092894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:02.130937 1612198 cri.go:89] found id: ""
	I0630 15:54:02.130976 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.130985 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:02.130996 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:02.131076 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:02.186285 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:02.186333 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:02.203252 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:02.203283 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:02.274788 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:02.274820 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:02.274836 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:02.354791 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:02.354835 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:04.902714 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:04.922560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:04.922631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:04.961257 1612198 cri.go:89] found id: ""
	I0630 15:54:04.961291 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.961302 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:04.961312 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:04.961388 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:04.997894 1612198 cri.go:89] found id: ""
	I0630 15:54:04.997927 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.997936 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:04.997942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:04.998007 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:05.038875 1612198 cri.go:89] found id: ""
	I0630 15:54:05.038923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.038936 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:05.038945 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:05.039035 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:05.080082 1612198 cri.go:89] found id: ""
	I0630 15:54:05.080123 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.080135 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:05.080145 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:05.080205 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:05.117322 1612198 cri.go:89] found id: ""
	I0630 15:54:05.117358 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.117371 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:05.117378 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:05.117469 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:05.172542 1612198 cri.go:89] found id: ""
	I0630 15:54:05.172578 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.172589 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:05.172598 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:05.172666 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:05.220246 1612198 cri.go:89] found id: ""
	I0630 15:54:05.220280 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.220291 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:05.220299 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:05.220365 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:05.279486 1612198 cri.go:89] found id: ""
	I0630 15:54:05.279521 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.279533 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:05.279548 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:05.279564 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:05.341677 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:05.341734 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:05.359513 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:05.359566 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:05.445100 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:05.445128 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:05.445144 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:05.552812 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:05.552883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.098433 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:08.115865 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:08.115985 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:08.155035 1612198 cri.go:89] found id: ""
	I0630 15:54:08.155077 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.155092 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:08.155103 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:08.155173 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:08.192666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.192702 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.192711 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:08.192719 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:08.192791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:08.234681 1612198 cri.go:89] found id: ""
	I0630 15:54:08.234710 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.234718 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:08.234723 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:08.234782 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:08.271666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.271699 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.271707 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:08.271714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:08.271769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:08.309335 1612198 cri.go:89] found id: ""
	I0630 15:54:08.309366 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.309375 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:08.309381 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:08.309471 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:08.351248 1612198 cri.go:89] found id: ""
	I0630 15:54:08.351284 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.351296 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:08.351305 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:08.351384 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:08.386803 1612198 cri.go:89] found id: ""
	I0630 15:54:08.386833 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.386843 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:08.386851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:08.386922 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:08.434407 1612198 cri.go:89] found id: ""
	I0630 15:54:08.434442 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.434451 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:08.434461 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:08.434474 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:08.510981 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:08.511009 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:08.511028 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:08.590361 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:08.590426 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.634603 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:08.634636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:08.687291 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:08.687339 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.202732 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:11.228516 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:11.228589 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:11.307836 1612198 cri.go:89] found id: ""
	I0630 15:54:11.307870 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.307882 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:11.307890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:11.307973 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:11.359347 1612198 cri.go:89] found id: ""
	I0630 15:54:11.359380 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.359400 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:11.359408 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:11.359467 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:11.414423 1612198 cri.go:89] found id: ""
	I0630 15:54:11.414469 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.414479 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:11.414486 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:11.414549 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:11.457669 1612198 cri.go:89] found id: ""
	I0630 15:54:11.457704 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.457722 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:11.457735 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:11.457804 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:11.511061 1612198 cri.go:89] found id: ""
	I0630 15:54:11.511131 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.511147 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:11.511159 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:11.511345 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:11.557886 1612198 cri.go:89] found id: ""
	I0630 15:54:11.557923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.557936 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:11.557946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:11.558014 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:11.603894 1612198 cri.go:89] found id: ""
	I0630 15:54:11.603926 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.603938 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:11.603946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:11.604016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:11.652115 1612198 cri.go:89] found id: ""
	I0630 15:54:11.652147 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.652156 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:11.652165 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:11.652177 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:11.700550 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:11.700588 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:11.761044 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:11.761088 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.779581 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:11.779669 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:11.872983 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:11.873013 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:11.873040 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:14.469180 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:14.488438 1612198 kubeadm.go:593] duration metric: took 4m4.858627578s to restartPrimaryControlPlane
	W0630 15:54:14.488521 1612198 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0630 15:54:14.488557 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:54:16.362367 1612198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.873774715s)
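A hedged aside: "kubeadm reset --force" removes the /etc/kubernetes/*.conf files and the static Pod manifests, which is why the very next "ls -la" check below reports every config file missing. One way to confirm on the node (paths copied from the log) would be:

    sudo ls -la /etc/kubernetes/ /etc/kubernetes/manifests/ 2>&1 | head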
	I0630 15:54:16.362472 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:54:16.381754 1612198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:54:16.394832 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:54:16.407997 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:54:16.408022 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:54:16.408088 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:54:16.420299 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:54:16.420374 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:54:16.432689 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:54:16.450141 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:54:16.450232 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:54:16.466230 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.478725 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:54:16.478810 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.491926 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:54:16.503661 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:54:16.503754 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
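The four grep/rm pairs above implement the stale-kubeconfig cleanup: any conf file that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. A minimal shell reconstruction of that loop (variable and loop structure assumed, not minikube's own code):

    CP_URL="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero both when the URL is absent and when the file is
      # missing (status 2 in the log above); either way the file is removed.
      sudo grep -q "$CP_URL" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done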
	I0630 15:54:16.516000 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:54:16.604779 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:54:16.604866 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:54:16.771725 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:54:16.771885 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:54:16.772009 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:54:17.000568 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:54:17.002768 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:54:17.007633 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:54:17.007744 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:54:17.007835 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:54:17.007906 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:54:17.007987 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:54:17.008050 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:54:17.008130 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:54:17.008216 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:54:17.008304 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:54:17.008429 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:54:17.008479 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:54:17.008545 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:54:17.091062 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:54:17.216540 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:54:17.314609 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:54:17.399588 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:54:17.417749 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:54:17.418852 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:54:17.418923 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:54:17.631341 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:54:17.633197 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:54:17.633340 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:54:17.639557 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:54:17.642269 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:54:17.646155 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:54:17.647610 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:54:57.647972 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:54:57.648456 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:54:57.648704 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:02.649537 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:02.649775 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:12.650265 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:12.650526 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:32.650986 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:32.651250 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652241 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:12.652569 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652621 1612198 kubeadm.go:310] 
	I0630 15:56:12.652681 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:56:12.652741 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:56:12.652751 1612198 kubeadm.go:310] 
	I0630 15:56:12.652778 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:56:12.652814 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:56:12.652960 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:56:12.652983 1612198 kubeadm.go:310] 
	I0630 15:56:12.653129 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:56:12.653192 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:56:12.653257 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:56:12.653270 1612198 kubeadm.go:310] 
	I0630 15:56:12.653457 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:56:12.653585 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:56:12.653603 1612198 kubeadm.go:310] 
	I0630 15:56:12.653767 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:56:12.653893 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:56:12.654008 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:56:12.654137 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:56:12.654157 1612198 kubeadm.go:310] 
	I0630 15:56:12.655912 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:56:12.655994 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:56:12.656047 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
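Following kubeadm's own hints above, a first triage pass on this node (socket path copied from the log; the resulting output is not captured in this report) could be:

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    curl -sS http://localhost:10248/healthz   # the probe kubeadm kept retrying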
	W0630 15:56:12.656312 1612198 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
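A hedged aside on the preflight warning above: the kubelet was started by kubeadm, but its systemd unit is not enabled, exactly as the warning says. The remedy kubeadm itself proposes is:

    sudo systemctl enable kubelet.service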
	
	I0630 15:56:12.656390 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:56:13.118145 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:56:13.137252 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:56:13.148791 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:56:13.148814 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:56:13.148866 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:56:13.159734 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:56:13.159815 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:56:13.170810 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:56:13.181716 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:56:13.181794 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:56:13.193772 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.204825 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:56:13.204895 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.216418 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:56:13.227545 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:56:13.227620 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:56:13.239663 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:56:13.314550 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:56:13.314640 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:56:13.462367 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:56:13.462550 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:56:13.462695 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:56:13.649387 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:56:13.651840 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:56:13.651943 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:56:13.652047 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:56:13.652179 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:56:13.652262 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:56:13.652381 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:56:13.652486 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:56:13.652658 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:56:13.652726 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:56:13.652788 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:56:13.652876 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:56:13.652930 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:56:13.653009 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:56:13.920791 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:56:14.049695 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:56:14.213882 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:56:14.469969 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:56:14.493927 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:56:14.496121 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:56:14.496179 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:56:14.667471 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:56:14.669824 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:56:14.670005 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:56:14.673040 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:56:14.674211 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:56:14.675608 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:56:14.680984 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:56:54.682952 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:56:54.683551 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:54.683769 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:59.684143 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:59.684406 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:09.685091 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:09.685374 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:29.686408 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:29.686681 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688249 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:58:09.688537 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688564 1612198 kubeadm.go:310] 
	I0630 15:58:09.688620 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:58:09.688672 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:58:09.688681 1612198 kubeadm.go:310] 
	I0630 15:58:09.688721 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:58:09.688774 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:58:09.688912 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:58:09.688921 1612198 kubeadm.go:310] 
	I0630 15:58:09.689114 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:58:09.689178 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:58:09.689250 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:58:09.689265 1612198 kubeadm.go:310] 
	I0630 15:58:09.689442 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:58:09.689568 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:58:09.689580 1612198 kubeadm.go:310] 
	I0630 15:58:09.689730 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:58:09.689812 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:58:09.689888 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:58:09.689950 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:58:09.689957 1612198 kubeadm.go:310] 
	I0630 15:58:09.692282 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:58:09.692363 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:58:09.692431 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:58:09.692497 1612198 kubeadm.go:394] duration metric: took 8m0.118278148s to StartCluster
	I0630 15:58:09.692554 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:58:09.692626 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:58:09.732128 1612198 cri.go:89] found id: ""
	I0630 15:58:09.732169 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.732178 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:58:09.732185 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:58:09.732247 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:58:09.764993 1612198 cri.go:89] found id: ""
	I0630 15:58:09.765024 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.765034 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:58:09.765042 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:58:09.765112 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:58:09.800767 1612198 cri.go:89] found id: ""
	I0630 15:58:09.800809 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.800820 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:58:09.800828 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:58:09.800888 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:58:09.834514 1612198 cri.go:89] found id: ""
	I0630 15:58:09.834544 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.834553 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:58:09.834560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:58:09.834636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:58:09.867918 1612198 cri.go:89] found id: ""
	I0630 15:58:09.867946 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.867955 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:58:09.867962 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:58:09.868016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:58:09.908166 1612198 cri.go:89] found id: ""
	I0630 15:58:09.908199 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.908208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:58:09.908215 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:58:09.908275 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:58:09.941613 1612198 cri.go:89] found id: ""
	I0630 15:58:09.941649 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.941658 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:58:09.941665 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:58:09.941721 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:58:09.983579 1612198 cri.go:89] found id: ""
	I0630 15:58:09.983617 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.983626 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:58:09.983637 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:58:09.983652 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:58:10.041447 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:58:10.041506 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:58:10.055597 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:58:10.055633 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:58:10.125308 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:58:10.125345 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:58:10.125363 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:58:10.231871 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:58:10.231919 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0630 15:58:10.270513 1612198 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0630 15:58:10.270594 1612198 out.go:270] * 
	W0630 15:58:10.270682 1612198 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.270703 1612198 out.go:270] * 
	W0630 15:58:10.272423 1612198 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0630 15:58:10.276013 1612198 out.go:201] 
	W0630 15:58:10.277283 1612198 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.277328 1612198 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0630 15:58:10.277358 1612198 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0630 15:58:10.279010 1612198 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-836310 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
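
A hedged follow-up sketch (assuming the VM from the failed run is still up): the diagnostics below are the ones kubeadm's error text recommends, run through `minikube ssh` in the same style as the Audit table above; the profile name and start flags are copied from the failing invocation, and the --extra-config value comes from minikube's Suggestion line.

  # Inspect kubelet health inside the guest (commands quoted from the kubeadm error text)
  out/minikube-linux-amd64 ssh -p old-k8s-version-836310 sudo systemctl status kubelet --no-pager
  out/minikube-linux-amd64 ssh -p old-k8s-version-836310 sudo journalctl -xeu kubelet --no-pager
  # List all CRI-O containers to spot a crashed control-plane component
  out/minikube-linux-amd64 ssh -p old-k8s-version-836310 sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

  # Retry with the cgroup-driver override suggested above (flags copied from the failing args)
  out/minikube-linux-amd64 start -p old-k8s-version-836310 --memory=3072 --driver=kvm2 \
    --container-runtime=crio --kubernetes-version=v1.20.0 \
    --extra-config=kubelet.cgroup-driver=systemd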
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (245.746982ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-836310 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-668101 sudo crio                          | flannel-668101 | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| delete  | -p flannel-668101                                    | flannel-668101 | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo docker                         | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo find                           | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo crio                           | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p bridge-668101                                     | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:52:42
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:52:42.950710 1620744 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:52:42.950982 1620744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:52:42.950992 1620744 out.go:358] Setting ErrFile to fd 2...
	I0630 15:52:42.950997 1620744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:52:42.951256 1620744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:52:42.951919 1620744 out.go:352] Setting JSON to false
	I0630 15:52:42.953176 1620744 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34455,"bootTime":1751264308,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:52:42.953303 1620744 start.go:140] virtualization: kvm guest
	I0630 15:52:42.956113 1620744 out.go:177] * [bridge-668101] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:52:42.957699 1620744 notify.go:220] Checking for updates...
	I0630 15:52:42.957717 1620744 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:52:42.959576 1620744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:52:42.961566 1620744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:52:42.963634 1620744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:42.965261 1620744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:52:42.966949 1620744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:52:42.968735 1620744 config.go:182] Loaded profile config "enable-default-cni-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:42.968869 1620744 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:42.968990 1620744 config.go:182] Loaded profile config "old-k8s-version-836310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:52:42.969114 1620744 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:52:43.011541 1620744 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:52:43.013118 1620744 start.go:304] selected driver: kvm2
	I0630 15:52:43.013145 1620744 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:52:43.013160 1620744 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:52:43.014286 1620744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:52:43.014403 1620744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:52:43.032217 1620744 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:52:43.032283 1620744 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 15:52:43.032559 1620744 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:52:43.032604 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:52:43.032615 1620744 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 15:52:43.032686 1620744 start.go:347] cluster config:
	{Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:52:43.032888 1620744 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:52:43.035138 1620744 out.go:177] * Starting "bridge-668101" primary control-plane node in "bridge-668101" cluster
	I0630 15:52:41.357269 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:41.358093 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find current IP address of domain flannel-668101 in network mk-flannel-668101
	I0630 15:52:41.358123 1619158 main.go:141] libmachine: (flannel-668101) DBG | I0630 15:52:41.358037 1619189 retry.go:31] will retry after 4.215568728s: waiting for domain to come up
	W0630 15:52:44.159824 1617293 pod_ready.go:104] pod "coredns-674b8bbfcf-6rphx" is not "Ready", error: <nil>
	I0630 15:52:44.656114 1617293 pod_ready.go:99] pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace is gone: getting pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace (will retry): pods "coredns-674b8bbfcf-6rphx" not found
	I0630 15:52:44.656143 1617293 pod_ready.go:86] duration metric: took 10.003645641s for pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.656159 1617293 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-v5d7m" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.660419 1617293 pod_ready.go:94] pod "coredns-674b8bbfcf-v5d7m" is "Ready"
	I0630 15:52:44.660451 1617293 pod_ready.go:86] duration metric: took 4.285712ms for pod "coredns-674b8bbfcf-v5d7m" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.662598 1617293 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.665846 1617293 pod_ready.go:94] pod "etcd-enable-default-cni-668101" is "Ready"
	I0630 15:52:44.665873 1617293 pod_ready.go:86] duration metric: took 3.248201ms for pod "etcd-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.667505 1617293 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.672030 1617293 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-668101" is "Ready"
	I0630 15:52:44.672060 1617293 pod_ready.go:86] duration metric: took 4.533989ms for pod "kube-apiserver-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.673855 1617293 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.057371 1617293 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-668101" is "Ready"
	I0630 15:52:45.057433 1617293 pod_ready.go:86] duration metric: took 383.556453ms for pod "kube-controller-manager-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.257321 1617293 pod_ready.go:83] waiting for pod "kube-proxy-gx8xr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.657721 1617293 pod_ready.go:94] pod "kube-proxy-gx8xr" is "Ready"
	I0630 15:52:45.657765 1617293 pod_ready.go:86] duration metric: took 400.308271ms for pod "kube-proxy-gx8xr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.857507 1617293 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:46.256921 1617293 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-668101" is "Ready"
	I0630 15:52:46.256953 1617293 pod_ready.go:86] duration metric: took 399.409105ms for pod "kube-scheduler-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:46.256970 1617293 pod_ready.go:40] duration metric: took 11.610545265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:52:46.306916 1617293 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:52:46.308982 1617293 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-668101" cluster and "default" namespace by default
	W0630 15:52:42.720632 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:42.720657 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:42.720672 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:42.805318 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:42.805369 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:45.356097 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:45.375177 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:45.375249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:45.411531 1612198 cri.go:89] found id: ""
	I0630 15:52:45.411573 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.411585 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:45.411594 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:45.411670 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:45.446010 1612198 cri.go:89] found id: ""
	I0630 15:52:45.446040 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.446049 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:45.446055 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:45.446126 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:45.483165 1612198 cri.go:89] found id: ""
	I0630 15:52:45.483213 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.483225 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:45.483234 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:45.483309 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:45.519693 1612198 cri.go:89] found id: ""
	I0630 15:52:45.519724 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.519732 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:45.519739 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:45.519813 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:45.554863 1612198 cri.go:89] found id: ""
	I0630 15:52:45.554902 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.554913 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:45.554921 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:45.555000 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:45.590429 1612198 cri.go:89] found id: ""
	I0630 15:52:45.590460 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.590469 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:45.590476 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:45.590545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:45.625876 1612198 cri.go:89] found id: ""
	I0630 15:52:45.625914 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.625927 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:45.625935 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:45.626002 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:45.663157 1612198 cri.go:89] found id: ""
	I0630 15:52:45.663188 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.663197 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:45.663210 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:45.663227 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:45.717765 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:45.717817 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:45.731782 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:45.731815 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:45.798057 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:45.798090 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:45.798106 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:45.878867 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:45.878917 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:43.036635 1620744 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:52:43.036694 1620744 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 15:52:43.036707 1620744 cache.go:56] Caching tarball of preloaded images
	I0630 15:52:43.036821 1620744 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:52:43.036837 1620744 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 15:52:43.036964 1620744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json ...
	I0630 15:52:43.036993 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json: {Name:mke71cd9af919bb85465b3e686b56c4cd0e1c7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:52:43.037185 1620744 start.go:360] acquireMachinesLock for bridge-668101: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 15:52:45.576190 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:45.576849 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find current IP address of domain flannel-668101 in network mk-flannel-668101
	I0630 15:52:45.576874 1619158 main.go:141] libmachine: (flannel-668101) DBG | I0630 15:52:45.576802 1619189 retry.go:31] will retry after 5.00816622s: waiting for domain to come up
	I0630 15:52:48.422047 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:48.441634 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:48.441712 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:48.482676 1612198 cri.go:89] found id: ""
	I0630 15:52:48.482706 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.482714 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:48.482721 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:48.482781 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:48.523604 1612198 cri.go:89] found id: ""
	I0630 15:52:48.523645 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.523659 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:48.523669 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:48.523740 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:48.566545 1612198 cri.go:89] found id: ""
	I0630 15:52:48.566576 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.566588 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:48.566595 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:48.566667 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:48.602166 1612198 cri.go:89] found id: ""
	I0630 15:52:48.602204 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.602219 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:48.602228 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:48.602296 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:48.645664 1612198 cri.go:89] found id: ""
	I0630 15:52:48.645701 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.645712 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:48.645724 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:48.645796 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:48.689364 1612198 cri.go:89] found id: ""
	I0630 15:52:48.689437 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.689449 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:48.689457 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:48.689532 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:48.727484 1612198 cri.go:89] found id: ""
	I0630 15:52:48.727594 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.727614 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:48.727623 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:48.727695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:48.765617 1612198 cri.go:89] found id: ""
	I0630 15:52:48.765649 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.765662 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:48.765676 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:48.765696 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:48.832480 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:48.832525 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:48.851001 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:48.851033 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:48.935090 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:48.935117 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:48.935139 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:49.020511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:49.020556 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.569582 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:51.586531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:51.586608 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:51.623986 1612198 cri.go:89] found id: ""
	I0630 15:52:51.624022 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.624034 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:51.624041 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:51.624097 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:51.660234 1612198 cri.go:89] found id: ""
	I0630 15:52:51.660289 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.660311 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:51.660321 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:51.660396 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:51.694392 1612198 cri.go:89] found id: ""
	I0630 15:52:51.694421 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.694431 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:51.694439 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:51.694509 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:51.733636 1612198 cri.go:89] found id: ""
	I0630 15:52:51.733679 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.733692 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:51.733700 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:51.733767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:51.770073 1612198 cri.go:89] found id: ""
	I0630 15:52:51.770105 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.770116 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:51.770125 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:51.770193 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:51.806054 1612198 cri.go:89] found id: ""
	I0630 15:52:51.806082 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.806096 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:51.806105 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:51.806166 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:51.844220 1612198 cri.go:89] found id: ""
	I0630 15:52:51.844253 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.844263 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:51.844270 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:51.844337 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:51.879139 1612198 cri.go:89] found id: ""
	I0630 15:52:51.879180 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.879192 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:51.879206 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:51.879225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:51.959131 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:51.959178 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.999852 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:51.999898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:52.054538 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:52.054586 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:52.068544 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:52.068582 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:52.141184 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
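Note: both failed "describe nodes" attempts share one root cause: with zero kube-apiserver containers found, nothing listens on localhost:8443, so kubectl's connection is refused. A hypothetical probe that reproduces the refusal:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// On this node nothing serves 8443 because the kube-apiserver
	// container was never created, so this prints "connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}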
	I0630 15:52:50.586392 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.586877 1619158 main.go:141] libmachine: (flannel-668101) found domain IP: 192.168.50.164
	I0630 15:52:50.586929 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has current primary IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.586951 1619158 main.go:141] libmachine: (flannel-668101) reserving static IP address...
	I0630 15:52:50.587266 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find host DHCP lease matching {name: "flannel-668101", mac: "52:54:00:d0:56:26", ip: "192.168.50.164"} in network mk-flannel-668101
	I0630 15:52:50.692673 1619158 main.go:141] libmachine: (flannel-668101) DBG | Getting to WaitForSSH function...
	I0630 15:52:50.692714 1619158 main.go:141] libmachine: (flannel-668101) reserved static IP address 192.168.50.164 for domain flannel-668101
	I0630 15:52:50.692729 1619158 main.go:141] libmachine: (flannel-668101) waiting for SSH...
	I0630 15:52:50.695660 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.696050 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101
	I0630 15:52:50.696074 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find defined IP address of network mk-flannel-668101 interface with MAC address 52:54:00:d0:56:26
	I0630 15:52:50.696281 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH client type: external
	I0630 15:52:50.696306 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa (-rw-------)
	I0630 15:52:50.696335 1619158 main.go:141] libmachine: (flannel-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:52:50.696364 1619158 main.go:141] libmachine: (flannel-668101) DBG | About to run SSH command:
	I0630 15:52:50.696404 1619158 main.go:141] libmachine: (flannel-668101) DBG | exit 0
	I0630 15:52:50.701524 1619158 main.go:141] libmachine: (flannel-668101) DBG | SSH cmd err, output: exit status 255: 
	I0630 15:52:50.701550 1619158 main.go:141] libmachine: (flannel-668101) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0630 15:52:50.701561 1619158 main.go:141] libmachine: (flannel-668101) DBG | command : exit 0
	I0630 15:52:50.701568 1619158 main.go:141] libmachine: (flannel-668101) DBG | err     : exit status 255
	I0630 15:52:50.701579 1619158 main.go:141] libmachine: (flannel-668101) DBG | output  : 
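Note: exit status 255 on the first WaitForSSH attempt is expected: it fired before the DHCP lease existed (the ssh args above show an empty docker@ host), and libmachine simply retries until the guest answers, as the next attempt shows. A sketch of that retry loop, with hypothetical host and key values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `exit 0` over ssh until the guest answers, using the
// same flags visible in the DBG lines above. The retry budget and sleep
// are illustrative values, not libmachine's.
func waitForSSH(host, keyPath string) error {
	for attempt := 0; attempt < 60; attempt++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between attempts
	}
	return fmt.Errorf("ssh to %s never became available", host)
}

func main() {
	if err := waitForSSH("192.168.50.164", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}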
	I0630 15:52:53.701789 1619158 main.go:141] libmachine: (flannel-668101) DBG | Getting to WaitForSSH function...
	I0630 15:52:53.704360 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.704932 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.704962 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.705130 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH client type: external
	I0630 15:52:53.705161 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa (-rw-------)
	I0630 15:52:53.705186 1619158 main.go:141] libmachine: (flannel-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:52:53.705196 1619158 main.go:141] libmachine: (flannel-668101) DBG | About to run SSH command:
	I0630 15:52:53.705216 1619158 main.go:141] libmachine: (flannel-668101) DBG | exit 0
	I0630 15:52:53.830137 1619158 main.go:141] libmachine: (flannel-668101) DBG | SSH cmd err, output: <nil>: 
	I0630 15:52:53.830489 1619158 main.go:141] libmachine: (flannel-668101) KVM machine creation complete
	I0630 15:52:53.831158 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetConfigRaw
	I0630 15:52:53.831811 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:53.832305 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:53.832539 1619158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:52:53.832558 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:52:53.834243 1619158 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:52:53.834258 1619158 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:52:53.834264 1619158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:52:53.834269 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:53.837692 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.838098 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.838132 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.838367 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:53.838567 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.838712 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.838827 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:53.838973 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:53.839228 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:53.839240 1619158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:52:53.941129 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:52:53.941166 1619158 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:52:53.941179 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:53.945852 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.946724 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.946789 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.947156 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:53.947488 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.947724 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.947876 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:53.948105 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:53.948402 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:53.948418 1619158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:52:54.054669 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:52:54.054748 1619158 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:52:54.054758 1619158 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:52:54.054767 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.055102 1619158 buildroot.go:166] provisioning hostname "flannel-668101"
	I0630 15:52:54.055132 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.055454 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.059064 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.059471 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.059502 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.059708 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:54.059899 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.060070 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.060224 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:54.060393 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:54.060624 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:54.060640 1619158 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-668101 && echo "flannel-668101" | sudo tee /etc/hostname
	I0630 15:52:54.177979 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-668101
	
	I0630 15:52:54.178018 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.181025 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.181363 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.181395 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.181596 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:54.181838 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.182126 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.182320 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:54.182493 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:54.182708 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:54.182725 1619158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-668101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-668101/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-668101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:52:54.297007 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:52:54.297044 1619158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:52:54.297108 1619158 buildroot.go:174] setting up certificates
	I0630 15:52:54.297155 1619158 provision.go:84] configureAuth start
	I0630 15:52:54.297174 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.297629 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:54.300624 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.300972 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.301001 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.301156 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.303586 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.303998 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.304030 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.304173 1619158 provision.go:143] copyHostCerts
	I0630 15:52:54.304256 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:52:54.304278 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:52:54.304353 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:52:54.304508 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:52:54.304518 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:52:54.304545 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:52:54.304611 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:52:54.304618 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:52:54.304640 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:52:54.304715 1619158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.flannel-668101 san=[127.0.0.1 192.168.50.164 flannel-668101 localhost minikube]
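Note: provision.go:117 issues a server certificate whose SANs cover the loopback address, the VM IP, and the hostname aliases. A self-signed sketch with the same SAN set (minikube actually signs against the ca.pem/ca-key.pem above; the org name and the 26280h expiry mirror the logged config):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-668101"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.164")},
		DNSNames:     []string{"flannel-668101", "localhost", "minikube"},
	}
	// Self-signed for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "wrote server cert with the SANs from the log")
}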
	I0630 15:52:55.093359 1619158 provision.go:177] copyRemoteCerts
	I0630 15:52:55.093451 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:52:55.093490 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.096608 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.097063 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.097100 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.097382 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.097605 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.097804 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.097967 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.181657 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:52:55.212265 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0630 15:52:55.244844 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:52:55.279323 1619158 provision.go:87] duration metric: took 982.144024ms to configureAuth
	I0630 15:52:55.279365 1619158 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:52:55.279616 1619158 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:55.279709 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.283643 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.284181 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.284211 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.284404 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.284627 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.284847 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.285000 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.285212 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:55.285583 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:55.285612 1619158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:52:55.778589 1620744 start.go:364] duration metric: took 12.741358919s to acquireMachinesLock for "bridge-668101"
	I0630 15:52:55.778680 1620744 start.go:93] Provisioning new machine with config: &{Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:52:55.778835 1620744 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 15:52:55.530045 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:52:55.530104 1619158 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:52:55.530116 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetURL
	I0630 15:52:55.531952 1619158 main.go:141] libmachine: (flannel-668101) DBG | using libvirt version 6000000
	I0630 15:52:55.534427 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.534823 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.534843 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.535146 1619158 main.go:141] libmachine: Docker is up and running!
	I0630 15:52:55.535159 1619158 main.go:141] libmachine: Reticulating splines...
	I0630 15:52:55.535167 1619158 client.go:171] duration metric: took 30.008807578s to LocalClient.Create
	I0630 15:52:55.535196 1619158 start.go:167] duration metric: took 30.008887821s to libmachine.API.Create "flannel-668101"
	I0630 15:52:55.535211 1619158 start.go:293] postStartSetup for "flannel-668101" (driver="kvm2")
	I0630 15:52:55.535279 1619158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:52:55.535323 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.535615 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:52:55.535648 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.538056 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.538461 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.538505 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.538621 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.538865 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.539071 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.539281 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.621263 1619158 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:52:55.626036 1619158 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:52:55.626073 1619158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:52:55.626186 1619158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:52:55.626347 1619158 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:52:55.626445 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:52:55.637649 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:52:55.667310 1619158 start.go:296] duration metric: took 132.08213ms for postStartSetup
	I0630 15:52:55.667372 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetConfigRaw
	I0630 15:52:55.668073 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:55.671293 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.671868 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.671903 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.672201 1619158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/config.json ...
	I0630 15:52:55.672423 1619158 start.go:128] duration metric: took 30.167785685s to createHost
	I0630 15:52:55.672451 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.674800 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.675142 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.675174 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.675451 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.675643 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.675788 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.676031 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.676253 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:55.676551 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:55.676567 1619158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:52:55.778402 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298775.758912603
	
	I0630 15:52:55.778427 1619158 fix.go:216] guest clock: 1751298775.758912603
	I0630 15:52:55.778435 1619158 fix.go:229] Guest: 2025-06-30 15:52:55.758912603 +0000 UTC Remote: 2025-06-30 15:52:55.67243923 +0000 UTC m=+30.329704815 (delta=86.473373ms)
	I0630 15:52:55.778459 1619158 fix.go:200] guest clock delta is within tolerance: 86.473373ms
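Note: fix.go parses the guest's `date +%s.%N` output and accepts the clock skew if the delta is small; here it is about 86ms. A sketch of the delta computation using the captured timestamp (the one-second threshold below is an assumption; this excerpt only shows that 86ms passed):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// `date +%s.%N` output captured from the guest in the log above.
	guest := "1751298775.758912603"
	f, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		panic(err)
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	delta := time.Since(time.Unix(sec, nsec))
	if delta < 0 {
		delta = -delta
	}
	// At the moment the log was taken this came out to ~86ms;
	// run later, the fixed timestamp gives an arbitrarily large delta.
	fmt.Printf("guest clock delta: %v (within tolerance? %v)\n", delta, delta < time.Second)
}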
	I0630 15:52:55.778466 1619158 start.go:83] releasing machines lock for "flannel-668101", held for 30.273912922s
	I0630 15:52:55.778518 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.778846 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:55.782021 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.782499 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.782533 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.782737 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783225 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783481 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783595 1619158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:52:55.783641 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.783703 1619158 ssh_runner.go:195] Run: cat /version.json
	I0630 15:52:55.783731 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.786539 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.786668 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.786964 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.786995 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.787022 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.787034 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.787195 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.787318 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.787429 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.787516 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.787627 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.787712 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.787790 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.787848 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.874997 1619158 ssh_runner.go:195] Run: systemctl --version
	I0630 15:52:55.904909 1619158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:52:56.070066 1619158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:52:56.076773 1619158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:52:56.076855 1619158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:52:56.096159 1619158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:52:56.096192 1619158 start.go:495] detecting cgroup driver to use...
	I0630 15:52:56.096267 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:52:56.116203 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:52:56.134008 1619158 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:52:56.134070 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:52:56.150561 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:52:56.166862 1619158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:52:56.306622 1619158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:52:56.473344 1619158 docker.go:246] disabling docker service ...
	I0630 15:52:56.473467 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:52:56.490252 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:52:56.505665 1619158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:52:56.705455 1619158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:52:56.856676 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
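Note: docker.go force-stops, disables, and masks the cri-docker and docker units so CRI-O remains the only runtime. A compact sketch of that sequence (unit list taken from the log; several commands fail harmlessly when a unit is absent, which is why the log keeps going after each one):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}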
	I0630 15:52:56.873735 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:52:56.897728 1619158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:52:56.897807 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.909980 1619158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:52:56.910087 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.921206 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.932511 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.943614 1619158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:52:56.956362 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.968071 1619158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.987887 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
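Note: the run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A Go equivalent of just the first substitution (illustrative; minikube really shells out to sed as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}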
	I0630 15:52:56.999240 1619158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:52:57.009535 1619158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:52:57.009612 1619158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:52:57.024825 1619158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 15:52:57.035690 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:52:57.175638 1619158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:52:57.278362 1619158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:52:57.278504 1619158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
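Note: after restarting crio, start.go:542 polls for the socket path rather than trusting the systemctl exit code. A minimal version of that wait (poll interval and helper name are mine, not minikube's):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls os.Stat until the socket file exists or the
// deadline passes, matching the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}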
	I0630 15:52:57.285443 1619158 start.go:563] Will wait 60s for crictl version
	I0630 15:52:57.285511 1619158 ssh_runner.go:195] Run: which crictl
	I0630 15:52:57.289297 1619158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:52:57.341170 1619158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:52:57.341278 1619158 ssh_runner.go:195] Run: crio --version
	I0630 15:52:57.370996 1619158 ssh_runner.go:195] Run: crio --version
	I0630 15:52:57.408719 1619158 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:52:54.642061 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:54.657561 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:54.657631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:54.699127 1612198 cri.go:89] found id: ""
	I0630 15:52:54.699156 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.699165 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:54.699172 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:54.699249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:54.743537 1612198 cri.go:89] found id: ""
	I0630 15:52:54.743582 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.743595 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:54.743604 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:54.743691 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:54.793655 1612198 cri.go:89] found id: ""
	I0630 15:52:54.793692 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.793705 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:54.793714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:54.793789 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:54.836404 1612198 cri.go:89] found id: ""
	I0630 15:52:54.836439 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.836450 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:54.836458 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:54.836530 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:54.881834 1612198 cri.go:89] found id: ""
	I0630 15:52:54.881866 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.881874 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:54.881881 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:54.881945 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:54.920907 1612198 cri.go:89] found id: ""
	I0630 15:52:54.920937 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.920945 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:54.920952 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:54.921019 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:54.964724 1612198 cri.go:89] found id: ""
	I0630 15:52:54.964777 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.964790 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:54.964799 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:54.964877 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:55.000611 1612198 cri.go:89] found id: ""
	I0630 15:52:55.000646 1612198 logs.go:282] 0 containers: []
	W0630 15:52:55.000654 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:55.000665 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:55.000678 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:55.075252 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:55.075285 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:55.075306 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:55.162081 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:55.162133 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:55.226240 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:55.226277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:55.297365 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:55.297429 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:55.781091 1620744 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0630 15:52:55.781346 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:52:55.781446 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:52:55.799943 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0630 15:52:55.800489 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:52:55.801103 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:52:55.801134 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:52:55.801483 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:52:55.801678 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:52:55.801826 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:52:55.802012 1620744 start.go:159] libmachine.API.Create for "bridge-668101" (driver="kvm2")
	I0630 15:52:55.802045 1620744 client.go:168] LocalClient.Create starting
	I0630 15:52:55.802082 1620744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 15:52:55.802123 1620744 main.go:141] libmachine: Decoding PEM data...
	I0630 15:52:55.802145 1620744 main.go:141] libmachine: Parsing certificate...
	I0630 15:52:55.802228 1620744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 15:52:55.802259 1620744 main.go:141] libmachine: Decoding PEM data...
	I0630 15:52:55.802275 1620744 main.go:141] libmachine: Parsing certificate...
	I0630 15:52:55.802328 1620744 main.go:141] libmachine: Running pre-create checks...
	I0630 15:52:55.802341 1620744 main.go:141] libmachine: (bridge-668101) Calling .PreCreateCheck
	I0630 15:52:55.802728 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:52:55.803114 1620744 main.go:141] libmachine: Creating machine...
	I0630 15:52:55.803131 1620744 main.go:141] libmachine: (bridge-668101) Calling .Create
	I0630 15:52:55.803562 1620744 main.go:141] libmachine: (bridge-668101) creating KVM machine...
	I0630 15:52:55.803587 1620744 main.go:141] libmachine: (bridge-668101) creating network...
	I0630 15:52:55.805278 1620744 main.go:141] libmachine: (bridge-668101) DBG | found existing default KVM network
	I0630 15:52:55.806568 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.806371 1620899 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2c:4b:58} reservation:<nil>}
	I0630 15:52:55.807384 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.807300 1620899 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:29:de} reservation:<nil>}
	I0630 15:52:55.808183 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.808055 1620899 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:d8:99} reservation:<nil>}
	I0630 15:52:55.809357 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.809236 1620899 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002cac60}
	I0630 15:52:55.809380 1620744 main.go:141] libmachine: (bridge-668101) DBG | created network xml: 
	I0630 15:52:55.809386 1620744 main.go:141] libmachine: (bridge-668101) DBG | <network>
	I0630 15:52:55.809392 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <name>mk-bridge-668101</name>
	I0630 15:52:55.809397 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <dns enable='no'/>
	I0630 15:52:55.809425 1620744 main.go:141] libmachine: (bridge-668101) DBG |   
	I0630 15:52:55.809435 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0630 15:52:55.809443 1620744 main.go:141] libmachine: (bridge-668101) DBG |     <dhcp>
	I0630 15:52:55.809449 1620744 main.go:141] libmachine: (bridge-668101) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0630 15:52:55.809456 1620744 main.go:141] libmachine: (bridge-668101) DBG |     </dhcp>
	I0630 15:52:55.809476 1620744 main.go:141] libmachine: (bridge-668101) DBG |   </ip>
	I0630 15:52:55.809495 1620744 main.go:141] libmachine: (bridge-668101) DBG |   
	I0630 15:52:55.809501 1620744 main.go:141] libmachine: (bridge-668101) DBG | </network>
	I0630 15:52:55.809510 1620744 main.go:141] libmachine: (bridge-668101) DBG | 
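
Annotation: the <network> XML above is then defined and started through libvirt. Assuming the libvirt.org/go/libvirt bindings (an assumption; minikube's KVM driver wraps this differently), the define-and-start step can be sketched as:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-bridge-668101</name>
  <dns enable='no'/>
  <ip address='192.168.72.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.72.2' end='192.168.72.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network from the XML above, then start it:
	// the programmatic equivalent of `virsh net-define` + `virsh net-start`.
	netw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer netw.Free()
	if err := netw.Create(); err != nil {
		log.Fatal(err)
	}
}
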
	I0630 15:52:55.815963 1620744 main.go:141] libmachine: (bridge-668101) DBG | trying to create private KVM network mk-bridge-668101 192.168.72.0/24...
	I0630 15:52:55.898159 1620744 main.go:141] libmachine: (bridge-668101) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 ...
	I0630 15:52:55.898202 1620744 main.go:141] libmachine: (bridge-668101) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 15:52:55.898214 1620744 main.go:141] libmachine: (bridge-668101) DBG | private KVM network mk-bridge-668101 192.168.72.0/24 created
	I0630 15:52:55.898234 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.898059 1620899 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:55.898373 1620744 main.go:141] libmachine: (bridge-668101) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 15:52:56.221476 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.221233 1620899 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa...
	I0630 15:52:56.640944 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.640745 1620899 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/bridge-668101.rawdisk...
	I0630 15:52:56.640998 1620744 main.go:141] libmachine: (bridge-668101) DBG | Writing magic tar header
	I0630 15:52:56.641019 1620744 main.go:141] libmachine: (bridge-668101) DBG | Writing SSH key tar header
	I0630 15:52:56.641031 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.640908 1620899 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 ...
	I0630 15:52:56.641054 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101
	I0630 15:52:56.641093 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 (perms=drwx------)
	I0630 15:52:56.641214 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 15:52:56.641244 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:56.641260 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 15:52:56.641272 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 15:52:56.641286 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 15:52:56.641298 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins
	I0630 15:52:56.641308 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home
	I0630 15:52:56.641320 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 15:52:56.641331 1620744 main.go:141] libmachine: (bridge-668101) DBG | skipping /home - not owner
	I0630 15:52:56.641357 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 15:52:56.641377 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 15:52:56.641386 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 15:52:56.641397 1620744 main.go:141] libmachine: (bridge-668101) creating domain...
	I0630 15:52:56.642571 1620744 main.go:141] libmachine: (bridge-668101) define libvirt domain using xml: 
	I0630 15:52:56.642602 1620744 main.go:141] libmachine: (bridge-668101) <domain type='kvm'>
	I0630 15:52:56.642633 1620744 main.go:141] libmachine: (bridge-668101)   <name>bridge-668101</name>
	I0630 15:52:56.642652 1620744 main.go:141] libmachine: (bridge-668101)   <memory unit='MiB'>3072</memory>
	I0630 15:52:56.642667 1620744 main.go:141] libmachine: (bridge-668101)   <vcpu>2</vcpu>
	I0630 15:52:56.642691 1620744 main.go:141] libmachine: (bridge-668101)   <features>
	I0630 15:52:56.642705 1620744 main.go:141] libmachine: (bridge-668101)     <acpi/>
	I0630 15:52:56.642713 1620744 main.go:141] libmachine: (bridge-668101)     <apic/>
	I0630 15:52:56.642725 1620744 main.go:141] libmachine: (bridge-668101)     <pae/>
	I0630 15:52:56.642745 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.642783 1620744 main.go:141] libmachine: (bridge-668101)   </features>
	I0630 15:52:56.642806 1620744 main.go:141] libmachine: (bridge-668101)   <cpu mode='host-passthrough'>
	I0630 15:52:56.642838 1620744 main.go:141] libmachine: (bridge-668101)   
	I0630 15:52:56.642863 1620744 main.go:141] libmachine: (bridge-668101)   </cpu>
	I0630 15:52:56.642880 1620744 main.go:141] libmachine: (bridge-668101)   <os>
	I0630 15:52:56.642900 1620744 main.go:141] libmachine: (bridge-668101)     <type>hvm</type>
	I0630 15:52:56.642914 1620744 main.go:141] libmachine: (bridge-668101)     <boot dev='cdrom'/>
	I0630 15:52:56.642925 1620744 main.go:141] libmachine: (bridge-668101)     <boot dev='hd'/>
	I0630 15:52:56.642944 1620744 main.go:141] libmachine: (bridge-668101)     <bootmenu enable='no'/>
	I0630 15:52:56.642956 1620744 main.go:141] libmachine: (bridge-668101)   </os>
	I0630 15:52:56.642969 1620744 main.go:141] libmachine: (bridge-668101)   <devices>
	I0630 15:52:56.642980 1620744 main.go:141] libmachine: (bridge-668101)     <disk type='file' device='cdrom'>
	I0630 15:52:56.642999 1620744 main.go:141] libmachine: (bridge-668101)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/boot2docker.iso'/>
	I0630 15:52:56.643011 1620744 main.go:141] libmachine: (bridge-668101)       <target dev='hdc' bus='scsi'/>
	I0630 15:52:56.643025 1620744 main.go:141] libmachine: (bridge-668101)       <readonly/>
	I0630 15:52:56.643041 1620744 main.go:141] libmachine: (bridge-668101)     </disk>
	I0630 15:52:56.643059 1620744 main.go:141] libmachine: (bridge-668101)     <disk type='file' device='disk'>
	I0630 15:52:56.643073 1620744 main.go:141] libmachine: (bridge-668101)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 15:52:56.643102 1620744 main.go:141] libmachine: (bridge-668101)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/bridge-668101.rawdisk'/>
	I0630 15:52:56.643114 1620744 main.go:141] libmachine: (bridge-668101)       <target dev='hda' bus='virtio'/>
	I0630 15:52:56.643122 1620744 main.go:141] libmachine: (bridge-668101)     </disk>
	I0630 15:52:56.643135 1620744 main.go:141] libmachine: (bridge-668101)     <interface type='network'>
	I0630 15:52:56.643147 1620744 main.go:141] libmachine: (bridge-668101)       <source network='mk-bridge-668101'/>
	I0630 15:52:56.643170 1620744 main.go:141] libmachine: (bridge-668101)       <model type='virtio'/>
	I0630 15:52:56.643189 1620744 main.go:141] libmachine: (bridge-668101)     </interface>
	I0630 15:52:56.643202 1620744 main.go:141] libmachine: (bridge-668101)     <interface type='network'>
	I0630 15:52:56.643213 1620744 main.go:141] libmachine: (bridge-668101)       <source network='default'/>
	I0630 15:52:56.643225 1620744 main.go:141] libmachine: (bridge-668101)       <model type='virtio'/>
	I0630 15:52:56.643235 1620744 main.go:141] libmachine: (bridge-668101)     </interface>
	I0630 15:52:56.643244 1620744 main.go:141] libmachine: (bridge-668101)     <serial type='pty'>
	I0630 15:52:56.643254 1620744 main.go:141] libmachine: (bridge-668101)       <target port='0'/>
	I0630 15:52:56.643269 1620744 main.go:141] libmachine: (bridge-668101)     </serial>
	I0630 15:52:56.643284 1620744 main.go:141] libmachine: (bridge-668101)     <console type='pty'>
	I0630 15:52:56.643297 1620744 main.go:141] libmachine: (bridge-668101)       <target type='serial' port='0'/>
	I0630 15:52:56.643307 1620744 main.go:141] libmachine: (bridge-668101)     </console>
	I0630 15:52:56.643318 1620744 main.go:141] libmachine: (bridge-668101)     <rng model='virtio'>
	I0630 15:52:56.643330 1620744 main.go:141] libmachine: (bridge-668101)       <backend model='random'>/dev/random</backend>
	I0630 15:52:56.643341 1620744 main.go:141] libmachine: (bridge-668101)     </rng>
	I0630 15:52:56.643348 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.643370 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.643393 1620744 main.go:141] libmachine: (bridge-668101)   </devices>
	I0630 15:52:56.643405 1620744 main.go:141] libmachine: (bridge-668101) </domain>
	I0630 15:52:56.643415 1620744 main.go:141] libmachine: (bridge-668101) 
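
Annotation: after dumping the domain XML, the log defines the domain, ensures both attached networks ("default" and "mk-bridge-668101") are active, and starts it. Under the same libvirt.org/go/libvirt assumption, that sequence looks roughly like this sketch:

// startDomain defines the VM from domainXML, brings up the two networks it
// attaches to, and boots it -- mirroring the "creating domain" / "ensuring
// networks are active" / "starting domain" lines above. Sketch only.
func startDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	for _, name := range []string{"default", "mk-bridge-668101"} {
		n, err := conn.LookupNetworkByName(name)
		if err != nil {
			return err
		}
		if active, _ := n.IsActive(); !active {
			if err := n.Create(); err != nil {
				n.Free()
				return err
			}
		}
		n.Free()
	}
	return dom.Create() // start the VM
}
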
	I0630 15:52:56.648384 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:c9:a1:4d in network default
	I0630 15:52:56.649121 1620744 main.go:141] libmachine: (bridge-668101) starting domain...
	I0630 15:52:56.649143 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:56.649148 1620744 main.go:141] libmachine: (bridge-668101) ensuring networks are active...
	I0630 15:52:56.649950 1620744 main.go:141] libmachine: (bridge-668101) Ensuring network default is active
	I0630 15:52:56.650256 1620744 main.go:141] libmachine: (bridge-668101) Ensuring network mk-bridge-668101 is active
	I0630 15:52:56.650853 1620744 main.go:141] libmachine: (bridge-668101) getting domain XML...
	I0630 15:52:56.651713 1620744 main.go:141] libmachine: (bridge-668101) creating domain...
	I0630 15:52:57.410163 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:57.414146 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:57.414618 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:57.414653 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:57.414941 1619158 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0630 15:52:57.419663 1619158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
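
Annotation: the one-liner above upserts the host.minikube.internal entry: it filters out any stale line, appends the fresh mapping, and sudo-copies the temp file back over /etc/hosts. The same rewrite in plain Go (a sketch; minikube performs this on the guest over ssh_runner, not locally):

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\thost.minikube.internal" and appends
// the new mapping -- the same edit the bash one-liner performs.
func upsertHost(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	kept := []string{}
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.50.1"); err != nil {
		log.Fatal(err)
	}
}
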
	I0630 15:52:57.434011 1619158 kubeadm.go:875] updating cluster {Name:flannel-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:52:57.434146 1619158 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:52:57.434191 1619158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:52:57.470291 1619158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:52:57.470364 1619158 ssh_runner.go:195] Run: which lz4
	I0630 15:52:57.475237 1619158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:52:57.480568 1619158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:52:57.480607 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:52:59.283095 1619158 crio.go:462] duration metric: took 1.807899896s to copy over tarball
	I0630 15:52:59.283202 1619158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
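
Annotation: the preload flow above first stats /preloaded.tar.lz4 (absent on first boot), copies the ~420 MB tarball over scp, then unpacks it into /var, preserving security xattrs and decompressing through lz4. The extraction command reproduced as a small os/exec sketch, with the same flags as the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Preserve file-capability xattrs and stream through lz4 while
	// extracting the preloaded images into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
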
	I0630 15:52:57.821154 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:57.853607 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:57.853696 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:57.914164 1612198 cri.go:89] found id: ""
	I0630 15:52:57.914210 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.914227 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:57.914246 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:57.914347 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:57.987318 1612198 cri.go:89] found id: ""
	I0630 15:52:57.987351 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.987366 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:57.987377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:57.987457 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:58.079419 1612198 cri.go:89] found id: ""
	I0630 15:52:58.079447 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.079455 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:58.079462 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:58.079527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:58.159322 1612198 cri.go:89] found id: ""
	I0630 15:52:58.159364 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.159376 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:58.159385 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:58.159456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:58.214549 1612198 cri.go:89] found id: ""
	I0630 15:52:58.214589 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.214605 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:58.214614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:58.214688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:58.268709 1612198 cri.go:89] found id: ""
	I0630 15:52:58.268743 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.268755 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:58.268764 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:58.268865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:58.336282 1612198 cri.go:89] found id: ""
	I0630 15:52:58.336316 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.336327 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:58.336335 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:58.336411 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:58.385539 1612198 cri.go:89] found id: ""
	I0630 15:52:58.385568 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.385577 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:58.385587 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:58.385600 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:58.490925 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:58.490953 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:58.490966 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:58.595534 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:58.595636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:58.670912 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:58.670947 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:58.746686 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:58.746777 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.264137 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:01.286226 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:01.286330 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:01.365280 1612198 cri.go:89] found id: ""
	I0630 15:53:01.365314 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.365328 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:01.365336 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:01.365446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:01.416551 1612198 cri.go:89] found id: ""
	I0630 15:53:01.416609 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.416628 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:01.416639 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:01.416760 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:01.466901 1612198 cri.go:89] found id: ""
	I0630 15:53:01.466951 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.466968 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:01.466992 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:01.467076 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:01.515958 1612198 cri.go:89] found id: ""
	I0630 15:53:01.516004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.516018 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:01.516026 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:01.516100 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:01.556162 1612198 cri.go:89] found id: ""
	I0630 15:53:01.556199 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.556212 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:01.556220 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:01.556294 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:01.596633 1612198 cri.go:89] found id: ""
	I0630 15:53:01.596668 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.596681 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:01.596701 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:01.596767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:01.643515 1612198 cri.go:89] found id: ""
	I0630 15:53:01.643544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.643553 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:01.643560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:01.643623 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:01.688673 1612198 cri.go:89] found id: ""
	I0630 15:53:01.688716 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.688730 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:01.688746 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:01.688763 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:01.732854 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:01.732887 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:01.792838 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:01.792898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.809743 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:01.809803 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:01.893975 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:01.894006 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:01.894020 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
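
Annotation: the repeated blocks from process 1612198 are minikube's log-gathering loop on a cluster whose control plane never came up: for each component it runs `crictl ps -a --quiet --name=<component>` and records that no container matched. A compact sketch of such a scan (hypothetical helper, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints matching container IDs one per line; empty output
		// means no container for the component exists at all.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}
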
	I0630 15:52:58.300955 1620744 main.go:141] libmachine: (bridge-668101) waiting for IP...
	I0630 15:52:58.302501 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.303671 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.303696 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.303566 1620899 retry.go:31] will retry after 218.695917ms: waiting for domain to come up
	I0630 15:52:58.524255 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.525158 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.525190 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.525070 1620899 retry.go:31] will retry after 355.788445ms: waiting for domain to come up
	I0630 15:52:58.882797 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.883330 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.883352 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.883258 1620899 retry.go:31] will retry after 433.916696ms: waiting for domain to come up
	I0630 15:52:59.319443 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:59.320277 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:59.320312 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:59.320255 1620899 retry.go:31] will retry after 591.607748ms: waiting for domain to come up
	I0630 15:52:59.914140 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:59.914771 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:59.914833 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:59.914762 1620899 retry.go:31] will retry after 653.936151ms: waiting for domain to come up
	I0630 15:53:00.571061 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:00.571855 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:00.571885 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:00.571800 1620899 retry.go:31] will retry after 843.188018ms: waiting for domain to come up
	I0630 15:53:01.416477 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:01.417384 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:01.417447 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:01.417320 1620899 retry.go:31] will retry after 766.048685ms: waiting for domain to come up
	I0630 15:53:02.185256 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:02.185660 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:02.185690 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:02.185641 1620899 retry.go:31] will retry after 1.410798952s: waiting for domain to come up
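
Annotation: the retry.go lines above poll for the new domain's DHCP lease with delays growing from ~219ms toward seconds. A stdlib Go sketch of that wait loop; `lookup` here is a hypothetical stand-in for the lease query:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address, sleeping a growing,
// jittered interval between attempts, like the retry.go lines above.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay += delay / 2 // roughly the 218ms -> 355ms -> 433ms growth logged
	}
	return "", fmt.Errorf("timed out waiting for domain to come up")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		return "192.168.72.10", attempts > 3 // pretend the lease appears
	}, time.Minute)
	fmt.Println(ip, err)
}
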
	I0630 15:53:01.524921 1619158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241677784s)
	I0630 15:53:01.524971 1619158 crio.go:469] duration metric: took 2.241824009s to extract the tarball
	I0630 15:53:01.524981 1619158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 15:53:01.580282 1619158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:01.626979 1619158 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:53:01.627012 1619158 cache_images.go:84] Images are preloaded, skipping loading
	I0630 15:53:01.627022 1619158 kubeadm.go:926] updating node { 192.168.50.164 8443 v1.33.2 crio true true} ...
	I0630 15:53:01.627165 1619158 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0630 15:53:01.627252 1619158 ssh_runner.go:195] Run: crio config
	I0630 15:53:01.702008 1619158 cni.go:84] Creating CNI manager for "flannel"
	I0630 15:53:01.702063 1619158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:53:01.702098 1619158 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-668101 NodeName:flannel-668101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:53:01.702303 1619158 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-668101"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
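Annotation: the block above is the fully rendered multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a toy illustration of how such a document is parameterized from the cluster config, a text/template sketch for the InitConfiguration fragment (field names here are illustrative, not minikube's kubeadm.go):

package main

import (
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	p := struct {
		NodeIP        string
		NodeName      string
		APIServerPort int
	}{"192.168.50.164", "flannel-668101", 8443}
	// Render the fragment to stdout; minikube writes the full multi-doc
	// config to /var/tmp/minikube/kubeadm.yaml.new instead.
	if err := template.Must(template.New("init").Parse(initTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
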
	I0630 15:53:01.702411 1619158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:53:01.715795 1619158 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:53:01.715889 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:53:01.729847 1619158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0630 15:53:01.752217 1619158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:53:01.775084 1619158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0630 15:53:01.796311 1619158 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0630 15:53:01.801900 1619158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:53:01.819789 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:01.986382 1619158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:02.019955 1619158 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101 for IP: 192.168.50.164
	I0630 15:53:02.019984 1619158 certs.go:194] generating shared ca certs ...
	I0630 15:53:02.020008 1619158 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.020252 1619158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:53:02.020336 1619158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:53:02.020356 1619158 certs.go:256] generating profile certs ...
	I0630 15:53:02.020447 1619158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key
	I0630 15:53:02.020471 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt with IP's: []
	I0630 15:53:02.580979 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt ...
	I0630 15:53:02.581014 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: {Name:mk57dc79d0a2f5ced3dc3dbf5df60db658cd128d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.581193 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key ...
	I0630 15:53:02.581204 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key: {Name:mkc12787b7a2e7f85b5efc0fe2ad3bd4bb3a36c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.581279 1619158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315
	I0630 15:53:02.581294 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.164]
	I0630 15:53:02.891830 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 ...
	I0630 15:53:02.891864 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315: {Name:mk4a3b251c65c4f6336605ebde0fd2b6394224cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.892035 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315 ...
	I0630 15:53:02.892047 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315: {Name:mkfdd1175258bc2f41de0b5ea2ff2aa4d2ba1824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.892138 1619158 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt
	I0630 15:53:02.892212 1619158 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key
	I0630 15:53:02.892263 1619158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key
	I0630 15:53:02.892288 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt with IP's: []
	I0630 15:53:03.110294 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt ...
	I0630 15:53:03.110338 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt: {Name:mk5f2a1c5ffd32a7751cdaa24de023db01340134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:03.110558 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key ...
	I0630 15:53:03.110576 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key: {Name:mk75d7060f89bcef318a4de6ba9f3f077d54a76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
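
Annotation: the certs.go steps above mint the per-profile client, apiserver, and aggregator certs, each signed by the shared minikube CA, with the apiserver SANs covering 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP. A condensed crypto/x509 sketch of issuing one such SAN-bearing cert, assuming an RSA CA (not minikube's exact crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newSignedCert issues a server cert for the given IP SANs, signed by
// caCert/caKey -- the shape of the "generating signed profile cert" steps.
func newSignedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.164
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Hypothetical self-signed CA just to exercise newSignedCert.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	der, _, err := newSignedCert(caCert, caKey, []net.IP{net.ParseIP("192.168.50.164")})
	fmt.Println(len(der), err)
}
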
	I0630 15:53:03.110779 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:53:03.110831 1619158 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:53:03.110847 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:53:03.110885 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:53:03.110918 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:53:03.110952 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:53:03.111006 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:53:03.111669 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:53:03.143651 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:53:03.173382 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:53:03.207609 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:53:03.239807 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 15:53:03.271613 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 15:53:03.304865 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:53:03.336277 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:53:03.367070 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:53:03.399740 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:53:03.431108 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:53:03.469922 1619158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:53:03.496991 1619158 ssh_runner.go:195] Run: openssl version
	I0630 15:53:03.503713 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:53:03.519935 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.525171 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.525235 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.533074 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:53:03.546306 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:53:03.560844 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.566199 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.566277 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.573685 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:53:03.589057 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:53:03.614844 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.621765 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.621846 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.631593 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
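
Annotation: the openssl/ln sequence above installs each CA into the system trust store: link the PEM under /etc/ssl/certs, compute its OpenSSL subject hash, and point /etc/ssl/certs/<hash>.0 at it (b5213941.0 for minikubeCA.pem; 3ec20f2e.0 and 51391683.0 for the test certs). One per-certificate step as a slightly simplified sketch (it links the hash name straight at the source PEM):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash
	// (e.g. b5213941) that names the /etc/ssl/certs/<hash>.0 symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
		log.Fatal(err)
	}
}
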
	I0630 15:53:03.649952 1619158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:53:03.656577 1619158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:53:03.656636 1619158 kubeadm.go:392] StartCluster: {Name:flannel-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:53:03.656726 1619158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:53:03.656792 1619158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:53:03.706253 1619158 cri.go:89] found id: ""
	I0630 15:53:03.706351 1619158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:53:03.718137 1619158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:53:03.730377 1619158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:53:03.745839 1619158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:53:03.745864 1619158 kubeadm.go:157] found existing configuration files:
	
	I0630 15:53:03.745922 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:53:03.757621 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:53:03.757687 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:53:03.771916 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:53:03.784628 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:53:03.784695 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:53:03.798159 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:53:03.809990 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:53:03.810067 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:53:03.822466 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:53:03.834020 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:53:03.834138 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
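
Annotation: the "config check failed, skipping stale config cleanup" pass above works file by file: if a kubeconfig under /etc/kubernetes does not contain the expected https://control-plane.minikube.internal:8443 endpoint (here the files simply don't exist yet), it is removed before `kubeadm init` runs. As a compact sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is
		// missing; either way the config cannot be reused, so remove it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s lacks %q - removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
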
	I0630 15:53:03.845749 1619158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:53:04.003225 1619158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:53:04.474834 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:04.495812 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:04.495894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:04.545620 1612198 cri.go:89] found id: ""
	I0630 15:53:04.545652 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.545664 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:04.545674 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:04.545819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:04.595168 1612198 cri.go:89] found id: ""
	I0630 15:53:04.595303 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.595325 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:04.595339 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:04.595423 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:04.648158 1612198 cri.go:89] found id: ""
	I0630 15:53:04.648189 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.648201 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:04.648210 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:04.648279 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:04.695407 1612198 cri.go:89] found id: ""
	I0630 15:53:04.695441 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.695452 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:04.695460 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:04.695525 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:04.745024 1612198 cri.go:89] found id: ""
	I0630 15:53:04.745059 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.745072 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:04.745079 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:04.745147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:04.784238 1612198 cri.go:89] found id: ""
	I0630 15:53:04.784278 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.784291 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:04.784301 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:04.784375 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:04.828921 1612198 cri.go:89] found id: ""
	I0630 15:53:04.828962 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.828976 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:04.828986 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:04.829058 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:04.878950 1612198 cri.go:89] found id: ""
	I0630 15:53:04.878980 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.878992 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:04.879004 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:04.879021 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:04.898852 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:04.898883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:04.994919 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:04.994955 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:04.994971 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:05.081838 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:05.081891 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:05.134599 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:05.134639 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:03.598543 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:03.599016 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:03.599041 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:03.599011 1620899 retry.go:31] will retry after 1.276009124s: waiting for domain to come up
	I0630 15:53:04.876532 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:04.877133 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:04.877161 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:04.877082 1620899 retry.go:31] will retry after 1.605247273s: waiting for domain to come up
	I0630 15:53:06.483950 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:06.484698 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:06.484730 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:06.484666 1620899 retry.go:31] will retry after 2.436119373s: waiting for domain to come up
	I0630 15:53:07.707840 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:07.724492 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:07.724584 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:07.764489 1612198 cri.go:89] found id: ""
	I0630 15:53:07.764533 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.764545 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:07.764553 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:07.764641 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:07.813734 1612198 cri.go:89] found id: ""
	I0630 15:53:07.813762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.813771 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:07.813777 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:07.813838 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:07.866385 1612198 cri.go:89] found id: ""
	I0630 15:53:07.866412 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.866420 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:07.866426 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:07.866480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:07.913274 1612198 cri.go:89] found id: ""
	I0630 15:53:07.913307 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.913317 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:07.913325 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:07.913394 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:07.966418 1612198 cri.go:89] found id: ""
	I0630 15:53:07.966461 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.966475 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:07.966484 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:07.966554 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:08.017379 1612198 cri.go:89] found id: ""
	I0630 15:53:08.017443 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.017457 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:08.017465 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:08.017559 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:08.070396 1612198 cri.go:89] found id: ""
	I0630 15:53:08.070427 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.070440 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:08.070449 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:08.070519 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:08.118074 1612198 cri.go:89] found id: ""
	I0630 15:53:08.118118 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.118132 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:08.118146 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:08.118164 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:08.139695 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:08.139728 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:08.252659 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:08.252683 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:08.252698 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:08.381553 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:08.381602 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:08.448865 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:08.448912 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.032838 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:11.059173 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:11.059251 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:11.115790 1612198 cri.go:89] found id: ""
	I0630 15:53:11.115826 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.115839 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:11.115848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:11.115920 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:11.175246 1612198 cri.go:89] found id: ""
	I0630 15:53:11.175295 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.175307 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:11.175316 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:11.175389 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:11.230317 1612198 cri.go:89] found id: ""
	I0630 15:53:11.230349 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.230360 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:11.230368 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:11.230437 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:11.283786 1612198 cri.go:89] found id: ""
	I0630 15:53:11.283827 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.283839 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:11.283848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:11.283927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:11.334412 1612198 cri.go:89] found id: ""
	I0630 15:53:11.334437 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.334445 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:11.334451 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:11.334508 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:11.399160 1612198 cri.go:89] found id: ""
	I0630 15:53:11.399195 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.399208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:11.399218 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:11.399307 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:11.461034 1612198 cri.go:89] found id: ""
	I0630 15:53:11.461065 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.461078 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:11.461087 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:11.461144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:11.509139 1612198 cri.go:89] found id: ""
	I0630 15:53:11.509169 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.509180 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:11.509194 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:11.509217 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:11.560268 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:11.560316 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.616198 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:11.616253 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:11.636775 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:11.636820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:11.735910 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:11.735936 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:11.735954 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:08.922659 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:08.923323 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:08.923356 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:08.923288 1620899 retry.go:31] will retry after 3.297531276s: waiting for domain to come up
	I0630 15:53:12.222353 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:12.223035 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:12.223068 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:12.222990 1620899 retry.go:31] will retry after 3.51443735s: waiting for domain to come up
	I0630 15:53:17.014584 1619158 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 15:53:17.014637 1619158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:53:17.014706 1619158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:53:17.014838 1619158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:53:17.014964 1619158 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 15:53:17.015057 1619158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:53:17.016771 1619158 out.go:235]   - Generating certificates and keys ...
	I0630 15:53:17.016879 1619158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:53:17.016954 1619158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:53:17.017037 1619158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:53:17.017140 1619158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:53:17.017235 1619158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:53:17.017318 1619158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:53:17.017382 1619158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:53:17.017508 1619158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-668101 localhost] and IPs [192.168.50.164 127.0.0.1 ::1]
	I0630 15:53:17.017557 1619158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:53:17.017714 1619158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-668101 localhost] and IPs [192.168.50.164 127.0.0.1 ::1]
	I0630 15:53:17.017816 1619158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:53:17.017907 1619158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:53:17.017980 1619158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:53:17.018051 1619158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:53:17.018104 1619158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:53:17.018164 1619158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 15:53:17.018252 1619158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:53:17.018322 1619158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:53:17.018382 1619158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:53:17.018488 1619158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:53:17.018583 1619158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:53:17.020268 1619158 out.go:235]   - Booting up control plane ...
	I0630 15:53:17.020370 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:53:17.020449 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:53:17.020523 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:53:17.020623 1619158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:53:17.020700 1619158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:53:17.020739 1619158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:53:17.020859 1619158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 15:53:17.020953 1619158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 15:53:17.021008 1619158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.332284ms
	I0630 15:53:17.021092 1619158 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 15:53:17.021178 1619158 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.50.164:8443/livez
	I0630 15:53:17.021267 1619158 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 15:53:17.021346 1619158 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 15:53:17.021442 1619158 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.131571599s
	I0630 15:53:17.021510 1619158 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.852171886s
	I0630 15:53:17.021568 1619158 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002662518s
	I0630 15:53:17.021665 1619158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 15:53:17.021773 1619158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 15:53:17.021830 1619158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 15:53:17.022015 1619158 kubeadm.go:310] [mark-control-plane] Marking the node flannel-668101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 15:53:17.022075 1619158 kubeadm.go:310] [bootstrap-token] Using token: ux2a4n.m86z51knn5xjib22
	I0630 15:53:17.023469 1619158 out.go:235]   - Configuring RBAC rules ...
	I0630 15:53:17.023592 1619158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 15:53:17.023701 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 15:53:17.023848 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 15:53:17.023981 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 15:53:17.024113 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 15:53:17.024200 1619158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 15:53:17.024304 1619158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 15:53:17.024347 1619158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 15:53:17.024396 1619158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 15:53:17.024424 1619158 kubeadm.go:310] 
	I0630 15:53:17.024503 1619158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 15:53:17.024510 1619158 kubeadm.go:310] 
	I0630 15:53:17.024574 1619158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 15:53:17.024580 1619158 kubeadm.go:310] 
	I0630 15:53:17.024600 1619158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 15:53:17.024654 1619158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 15:53:17.024696 1619158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 15:53:17.024705 1619158 kubeadm.go:310] 
	I0630 15:53:17.024750 1619158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 15:53:17.024756 1619158 kubeadm.go:310] 
	I0630 15:53:17.024799 1619158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 15:53:17.024805 1619158 kubeadm.go:310] 
	I0630 15:53:17.024848 1619158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 15:53:17.024952 1619158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 15:53:17.025026 1619158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 15:53:17.025033 1619158 kubeadm.go:310] 
	I0630 15:53:17.025114 1619158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 15:53:17.025179 1619158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 15:53:17.025185 1619158 kubeadm.go:310] 
	I0630 15:53:17.025258 1619158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ux2a4n.m86z51knn5xjib22 \
	I0630 15:53:17.025350 1619158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 15:53:17.025370 1619158 kubeadm.go:310] 	--control-plane 
	I0630 15:53:17.025374 1619158 kubeadm.go:310] 
	I0630 15:53:17.025507 1619158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 15:53:17.025515 1619158 kubeadm.go:310] 
	I0630 15:53:17.025583 1619158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ux2a4n.m86z51knn5xjib22 \
	I0630 15:53:17.025707 1619158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
	I0630 15:53:17.025719 1619158 cni.go:84] Creating CNI manager for "flannel"
	I0630 15:53:17.027099 1619158 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0630 15:53:14.327948 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:14.347007 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:14.347078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:14.391736 1612198 cri.go:89] found id: ""
	I0630 15:53:14.391770 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.391782 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:14.391790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:14.391855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:14.438236 1612198 cri.go:89] found id: ""
	I0630 15:53:14.438274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.438286 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:14.438294 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:14.438381 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:14.479508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.479539 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.479550 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:14.479558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:14.479618 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:14.530347 1612198 cri.go:89] found id: ""
	I0630 15:53:14.530386 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.530400 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:14.530409 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:14.530480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:14.576356 1612198 cri.go:89] found id: ""
	I0630 15:53:14.576392 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.576404 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:14.576413 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:14.576491 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:14.627508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.627546 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.627557 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:14.627565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:14.627636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:14.674780 1612198 cri.go:89] found id: ""
	I0630 15:53:14.674808 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.674824 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:14.674832 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:14.674899 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:14.717562 1612198 cri.go:89] found id: ""
	I0630 15:53:14.717599 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.717611 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:14.717624 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:14.717655 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:14.801031 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:14.801063 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:14.801083 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:14.890511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:14.890559 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:14.953255 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:14.953300 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:15.023105 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:15.023160 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:17.543438 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:17.564446 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:17.564545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:17.602287 1612198 cri.go:89] found id: ""
	I0630 15:53:17.602336 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.602349 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:17.602358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:17.602449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:17.643215 1612198 cri.go:89] found id: ""
	I0630 15:53:17.643246 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.643259 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:17.643266 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:17.643328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:15.813970 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:15.814578 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:15.814693 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:15.814493 1620899 retry.go:31] will retry after 4.330770463s: waiting for domain to come up
	I0630 15:53:17.028285 1619158 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0630 15:53:17.034603 1619158 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.33.2/kubectl ...
	I0630 15:53:17.034627 1619158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0630 15:53:17.064463 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0630 15:53:17.543422 1619158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:53:17.543486 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:17.543598 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-668101 minikube.k8s.io/updated_at=2025_06_30T15_53_17_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=flannel-668101 minikube.k8s.io/primary=true
	I0630 15:53:17.594413 1619158 ops.go:34] apiserver oom_adj: -16
	I0630 15:53:17.727637 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:18.228526 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:18.727798 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:19.227728 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:19.728564 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:20.227759 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:20.728760 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.228341 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.728419 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.856237 1619158 kubeadm.go:1105] duration metric: took 4.312811681s to wait for elevateKubeSystemPrivileges
	I0630 15:53:21.856299 1619158 kubeadm.go:394] duration metric: took 18.199648133s to StartCluster
	I0630 15:53:21.856325 1619158 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:21.856421 1619158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:53:21.857563 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:21.857818 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 15:53:21.857835 1619158 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:53:21.857909 1619158 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:53:21.858018 1619158 addons.go:69] Setting storage-provisioner=true in profile "flannel-668101"
	I0630 15:53:21.858038 1619158 addons.go:238] Setting addon storage-provisioner=true in "flannel-668101"
	I0630 15:53:21.858043 1619158 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:21.858037 1619158 addons.go:69] Setting default-storageclass=true in profile "flannel-668101"
	I0630 15:53:21.858077 1619158 host.go:66] Checking if "flannel-668101" exists ...
	I0630 15:53:21.858106 1619158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-668101"
	I0630 15:53:21.858566 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.858573 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.858594 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.858610 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.859497 1619158 out.go:177] * Verifying Kubernetes components...
	I0630 15:53:21.861465 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:21.878756 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0630 15:53:21.879278 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.879431 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0630 15:53:21.879778 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.879797 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.879838 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.880325 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.880347 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.880358 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.880762 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.881385 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.881459 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.881515 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.885509 1619158 addons.go:238] Setting addon default-storageclass=true in "flannel-668101"
	I0630 15:53:21.885555 1619158 host.go:66] Checking if "flannel-668101" exists ...
	I0630 15:53:21.885936 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.885985 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.903264 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0630 15:53:21.903821 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.904198 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0630 15:53:21.904415 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.904440 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.904784 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.904851 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.905447 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.905503 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.906077 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.906103 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.906550 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.906795 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.913135 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:53:21.915545 1619158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:53:17.684398 1612198 cri.go:89] found id: ""
	I0630 15:53:17.684474 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.684484 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:17.684493 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:17.684567 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:17.734640 1612198 cri.go:89] found id: ""
	I0630 15:53:17.734681 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.734694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:17.734702 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:17.734787 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:17.771368 1612198 cri.go:89] found id: ""
	I0630 15:53:17.771404 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.771416 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:17.771425 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:17.771497 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:17.828694 1612198 cri.go:89] found id: ""
	I0630 15:53:17.828724 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.828732 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:17.828741 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:17.828815 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:17.870487 1612198 cri.go:89] found id: ""
	I0630 15:53:17.870535 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.870549 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:17.870558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:17.870639 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:17.907397 1612198 cri.go:89] found id: ""
	I0630 15:53:17.907430 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.907440 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:17.907451 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:17.907464 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:17.983887 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:17.983934 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:18.027406 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:18.027439 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:18.079092 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:18.079140 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:18.094309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:18.094345 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:18.168726 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:20.669207 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:20.688479 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:20.688575 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:20.729290 1612198 cri.go:89] found id: ""
	I0630 15:53:20.729317 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.729327 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:20.729334 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:20.729399 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:20.772585 1612198 cri.go:89] found id: ""
	I0630 15:53:20.772606 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.772638 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:20.772647 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:20.772704 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:20.815369 1612198 cri.go:89] found id: ""
	I0630 15:53:20.815407 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.815419 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:20.815428 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:20.815490 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:20.856251 1612198 cri.go:89] found id: ""
	I0630 15:53:20.856282 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.856294 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:20.856304 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:20.856371 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:20.895690 1612198 cri.go:89] found id: ""
	I0630 15:53:20.895723 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.895732 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:20.895743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:20.895823 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:20.938040 1612198 cri.go:89] found id: ""
	I0630 15:53:20.938075 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.938085 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:20.938094 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:20.938163 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:20.983241 1612198 cri.go:89] found id: ""
	I0630 15:53:20.983280 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.983293 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:20.983302 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:20.983373 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:21.029599 1612198 cri.go:89] found id: ""
	I0630 15:53:21.029633 1612198 logs.go:282] 0 containers: []
	W0630 15:53:21.029645 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:21.029659 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:21.029675 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:21.115729 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:21.115753 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:21.115766 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:21.192780 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:21.192824 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:21.238081 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:21.238141 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:21.298363 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:21.298437 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:20.150210 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.151081 1620744 main.go:141] libmachine: (bridge-668101) found domain IP: 192.168.72.11
	I0630 15:53:20.151108 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has current primary IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.151118 1620744 main.go:141] libmachine: (bridge-668101) reserving static IP address...
	I0630 15:53:20.151802 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find host DHCP lease matching {name: "bridge-668101", mac: "52:54:00:de:25:66", ip: "192.168.72.11"} in network mk-bridge-668101
	I0630 15:53:20.255604 1620744 main.go:141] libmachine: (bridge-668101) reserved static IP address 192.168.72.11 for domain bridge-668101
	I0630 15:53:20.255640 1620744 main.go:141] libmachine: (bridge-668101) waiting for SSH...
	I0630 15:53:20.255651 1620744 main.go:141] libmachine: (bridge-668101) DBG | Getting to WaitForSSH function...
	I0630 15:53:20.259016 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.259553 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.259578 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.259789 1620744 main.go:141] libmachine: (bridge-668101) DBG | Using SSH client type: external
	I0630 15:53:20.259817 1620744 main.go:141] libmachine: (bridge-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa (-rw-------)
	I0630 15:53:20.259855 1620744 main.go:141] libmachine: (bridge-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:53:20.259878 1620744 main.go:141] libmachine: (bridge-668101) DBG | About to run SSH command:
	I0630 15:53:20.259893 1620744 main.go:141] libmachine: (bridge-668101) DBG | exit 0
	I0630 15:53:20.389637 1620744 main.go:141] libmachine: (bridge-668101) DBG | SSH cmd err, output: <nil>: 
	I0630 15:53:20.390056 1620744 main.go:141] libmachine: (bridge-668101) KVM machine creation complete
	I0630 15:53:20.390289 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:53:20.390852 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:20.391109 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:20.391342 1620744 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:53:20.391357 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:20.392814 1620744 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:53:20.392829 1620744 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:53:20.392834 1620744 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:53:20.392840 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.396358 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.396743 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.396783 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.397085 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.397290 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.397458 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.397650 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.397853 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.398148 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.398164 1620744 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:53:20.508895 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:53:20.508932 1620744 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:53:20.508944 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.512198 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.512629 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.512658 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.512888 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.513085 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.513290 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.513461 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.513609 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.513804 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.513814 1620744 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:53:20.626452 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:53:20.626583 1620744 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:53:20.626595 1620744 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:53:20.626603 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.626863 1620744 buildroot.go:166] provisioning hostname "bridge-668101"
	I0630 15:53:20.626886 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.627111 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.630431 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.631000 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.631029 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.631318 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.631539 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.631746 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.631891 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.632041 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.632253 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.632267 1620744 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-668101 && echo "bridge-668101" | sudo tee /etc/hostname
	I0630 15:53:20.768072 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-668101
	
	I0630 15:53:20.768109 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.772078 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.772554 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.772641 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.772981 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.773268 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.773482 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.773700 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.773939 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.774161 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.774183 1620744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-668101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-668101/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-668101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:53:20.912221 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
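
The command above is the standard /etc/hosts hostname idiom: rewrite an existing 127.0.1.1 alias if one is present, otherwise append a fresh line. A minimal standalone sketch of the same logic, assuming only the hostname from this log (everything else is illustrative):

    # Replace or append the 127.0.1.1 alias for the machine's hostname.
    NAME=bridge-668101
    if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
      fi
    fi
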
	I0630 15:53:20.912262 1620744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:53:20.912306 1620744 buildroot.go:174] setting up certificates
	I0630 15:53:20.912324 1620744 provision.go:84] configureAuth start
	I0630 15:53:20.912343 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.912731 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:20.916012 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.916475 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.916519 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.916686 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.919828 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.920293 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.920328 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.920495 1620744 provision.go:143] copyHostCerts
	I0630 15:53:20.920585 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:53:20.920609 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:53:20.920712 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:53:20.920869 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:53:20.920882 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:53:20.920919 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:53:20.921008 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:53:20.921018 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:53:20.921044 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:53:20.921126 1620744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.bridge-668101 san=[127.0.0.1 192.168.72.11 bridge-668101 localhost minikube]
	I0630 15:53:21.264068 1620744 provision.go:177] copyRemoteCerts
	I0630 15:53:21.264165 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:53:21.264213 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.268086 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.268409 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.268452 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.268601 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.268924 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.269110 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.269238 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:21.361451 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:53:21.391187 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 15:53:21.419255 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:53:21.448237 1620744 provision.go:87] duration metric: took 535.893652ms to configureAuth
	I0630 15:53:21.448274 1620744 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:53:21.448476 1620744 config.go:182] Loaded profile config "bridge-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:21.448584 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.453284 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.453882 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.453912 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.454135 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.454353 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.454521 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.454680 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.454822 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:21.455051 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:21.455078 1620744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:53:21.715413 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
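
The SSH command at 15:53:21.455 writes a one-line environment file that the crio systemd unit picks up, then bounces the service. Done by hand, with the same path and value as the logged command:

    # Drop-in env file consumed by the crio unit, then restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
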
	
	I0630 15:53:21.715442 1620744 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:53:21.715451 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetURL
	I0630 15:53:21.716819 1620744 main.go:141] libmachine: (bridge-668101) DBG | using libvirt version 6000000
	I0630 15:53:21.719440 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.719824 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.719856 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.719970 1620744 main.go:141] libmachine: Docker is up and running!
	I0630 15:53:21.719983 1620744 main.go:141] libmachine: Reticulating splines...
	I0630 15:53:21.719993 1620744 client.go:171] duration metric: took 25.917938791s to LocalClient.Create
	I0630 15:53:21.720027 1620744 start.go:167] duration metric: took 25.918028738s to libmachine.API.Create "bridge-668101"
	I0630 15:53:21.720040 1620744 start.go:293] postStartSetup for "bridge-668101" (driver="kvm2")
	I0630 15:53:21.720054 1620744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:53:21.720081 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.720445 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:53:21.720475 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.723380 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.723862 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.723895 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.724514 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.724885 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.725127 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.725432 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:21.813595 1620744 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:53:21.818546 1620744 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:53:21.818584 1620744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:53:21.818645 1620744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:53:21.818728 1620744 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:53:21.818833 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:53:21.830037 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:53:21.862135 1620744 start.go:296] duration metric: took 142.08086ms for postStartSetup
	I0630 15:53:21.862197 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:53:21.862968 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:21.866304 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.866720 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.866752 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.867254 1620744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json ...
	I0630 15:53:21.867599 1620744 start.go:128] duration metric: took 26.08874701s to createHost
	I0630 15:53:21.867640 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.870855 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.871356 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.871397 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.871563 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.871789 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.871989 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.872148 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.872344 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:21.872607 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:21.872619 1620744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:53:21.990814 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298801.970811827
	
	I0630 15:53:21.990846 1620744 fix.go:216] guest clock: 1751298801.970811827
	I0630 15:53:21.990856 1620744 fix.go:229] Guest: 2025-06-30 15:53:21.970811827 +0000 UTC Remote: 2025-06-30 15:53:21.867622048 +0000 UTC m=+38.958890662 (delta=103.189779ms)
	I0630 15:53:21.990888 1620744 fix.go:200] guest clock delta is within tolerance: 103.189779ms
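
The clock check above runs `date +%s.%N` in the guest over SSH and compares it with the host's wall clock; here the ~103ms delta is accepted. A rough shell equivalent (the key path is shortened from the log, and the awk comparison is illustrative, not minikube's exact tolerance logic):

    # Compare guest and host clocks; sub-second skew is normally harmless.
    KEY=.minikube/machines/bridge-668101/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.72.11 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %+.3fs\n", h - g }'
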
	I0630 15:53:21.990895 1620744 start.go:83] releasing machines lock for "bridge-668101", held for 26.212259549s
	I0630 15:53:21.990921 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.991256 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:21.994862 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.995334 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.995365 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.995601 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996174 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996422 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996540 1620744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:53:21.996586 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.996665 1620744 ssh_runner.go:195] Run: cat /version.json
	I0630 15:53:21.996697 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:22.000078 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000431 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:22.000471 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000574 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000868 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:22.001096 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:22.001101 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:22.001197 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.001278 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:22.001303 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:22.001484 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:22.001499 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:22.001633 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:22.001809 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:22.115933 1620744 ssh_runner.go:195] Run: systemctl --version
	I0630 15:53:22.124264 1620744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:53:22.297158 1620744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:53:22.303464 1620744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:53:22.303535 1620744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:53:22.322898 1620744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
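
The find/mv pair above sidelines any pre-existing bridge or podman CNI configs so CRI-O will not load them ahead of the CNI that minikube writes itself. The same step, spelled with the safer `sh -c '... "$1"'` exec form and without the `-printf` logging (behaviorally equivalent to the logged command):

    # Rename competing CNI configs out of CRI-O's load path.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
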
	I0630 15:53:22.322933 1620744 start.go:495] detecting cgroup driver to use...
	I0630 15:53:22.323033 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:53:22.346693 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:53:22.370685 1620744 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:53:22.370799 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:53:22.388014 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:53:22.405538 1620744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:53:22.556327 1620744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:53:22.736266 1620744 docker.go:246] disabling docker service ...
	I0630 15:53:22.736364 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:53:22.755856 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:53:22.773629 1620744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:53:21.916791 1619158 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:21.916818 1619158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:53:21.916850 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:53:21.920269 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.920634 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:53:21.920657 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.920814 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:53:21.921063 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.921260 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:53:21.921462 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:53:21.930939 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0630 15:53:21.931592 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.932329 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.932352 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.932845 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.933076 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.935023 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:53:21.935343 1619158 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:21.935362 1619158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:53:21.935385 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:53:21.938667 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.939066 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:53:21.939089 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.939228 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:53:21.939438 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.939561 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:53:21.939667 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:53:22.100716 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 15:53:22.185715 1619158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:22.445585 1619158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:22.457596 1619158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:22.671125 1619158 start.go:972] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
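
The host record is injected by round-tripping the coredns ConfigMap through sed: a `hosts` plugin block mapping host.minikube.internal to the host-side gateway (192.168.50.1 for this flannel profile) is spliced in just before the `forward` line. With a plain kubeconfig the same edit looks like:

    # Splice a hosts{} block into the Corefile and replace the ConfigMap.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -
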
	I0630 15:53:22.672317 1619158 node_ready.go:35] waiting up to 15m0s for node "flannel-668101" to be "Ready" ...
	I0630 15:53:22.953479 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.953512 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.953863 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.953868 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:22.953885 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:22.953895 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.953902 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.954132 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.954147 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:22.966064 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.966091 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.966575 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:22.966595 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.966608 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.178366 1619158 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-668101" context rescaled to 1 replicas
	I0630 15:53:23.182951 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:23.182983 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:23.183310 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:23.183341 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.183352 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:23.183359 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:23.183771 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:23.183785 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.183846 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:23.185609 1619158 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0630 15:53:22.968973 1620744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:53:23.133301 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:53:23.155249 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:53:23.183726 1620744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:53:23.183827 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.198004 1620744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:53:23.198112 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.210920 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.223143 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.235289 1620744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:53:23.248292 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.260423 1620744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.280821 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
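
Between 15:53:23.155 and here the runtime gets configured end to end: crictl is pointed at the CRI-O socket, the pause image is pinned, cgroupfs is forced as the cgroup manager with conmon in the "pod" cgroup, and unprivileged low ports are opened. Condensed into one script with the same paths and values as the logged commands:

    # CRI-O / crictl configuration as performed above.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" \
      || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
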
	I0630 15:53:23.293185 1620744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:53:23.305009 1620744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:53:23.305155 1620744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:53:23.321828 1620744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
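
The failed `sysctl net.bridge.bridge-nf-call-iptables` at 15:53:23.305 is expected on a fresh guest: that key only exists once br_netfilter is loaded, hence the modprobe fallback, followed by enabling IPv4 forwarding for pod traffic. As a standalone snippet:

    # Ensure bridged traffic hits iptables and IPv4 forwarding is on.
    sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null \
      || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
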
	I0630 15:53:23.333118 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:23.476277 1620744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:53:23.585009 1620744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:53:23.585109 1620744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:53:23.590082 1620744 start.go:563] Will wait 60s for crictl version
	I0630 15:53:23.590166 1620744 ssh_runner.go:195] Run: which crictl
	I0630 15:53:23.593975 1620744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:53:23.637313 1620744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:53:23.637475 1620744 ssh_runner.go:195] Run: crio --version
	I0630 15:53:23.668285 1620744 ssh_runner.go:195] Run: crio --version
	I0630 15:53:23.699975 1620744 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:53:23.186948 1619158 addons.go:514] duration metric: took 1.329044999s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0630 15:53:24.675577 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:23.816993 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:23.835380 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:23.835460 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:23.877562 1612198 cri.go:89] found id: ""
	I0630 15:53:23.877598 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.877610 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:23.877618 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:23.877695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:23.919089 1612198 cri.go:89] found id: ""
	I0630 15:53:23.919130 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.919144 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:23.919152 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:23.919232 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:23.964835 1612198 cri.go:89] found id: ""
	I0630 15:53:23.964864 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.964875 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:23.964883 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:23.964956 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:24.011639 1612198 cri.go:89] found id: ""
	I0630 15:53:24.011680 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.011694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:24.011704 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:24.011791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:24.059206 1612198 cri.go:89] found id: ""
	I0630 15:53:24.059240 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.059250 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:24.059262 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:24.059335 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:24.116479 1612198 cri.go:89] found id: ""
	I0630 15:53:24.116517 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.116530 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:24.116540 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:24.116619 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:24.164108 1612198 cri.go:89] found id: ""
	I0630 15:53:24.164142 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.164153 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:24.164162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:24.164235 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:24.232264 1612198 cri.go:89] found id: ""
	I0630 15:53:24.232299 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.232312 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:24.232325 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:24.232343 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:24.334546 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:24.334577 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:24.334597 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:24.450906 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:24.450963 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:24.523317 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:24.523361 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:24.609506 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:24.609547 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:27.134042 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:27.156543 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:27.156635 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:27.206777 1612198 cri.go:89] found id: ""
	I0630 15:53:27.206819 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.206831 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:27.206841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:27.206924 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:27.257098 1612198 cri.go:89] found id: ""
	I0630 15:53:27.257141 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.257153 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:27.257162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:27.257226 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:27.311101 1612198 cri.go:89] found id: ""
	I0630 15:53:27.311129 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.311137 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:27.311164 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:27.311233 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:27.356225 1612198 cri.go:89] found id: ""
	I0630 15:53:27.356264 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.356276 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:27.356285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:27.356446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:27.408114 1612198 cri.go:89] found id: ""
	I0630 15:53:27.408173 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.408185 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:27.408194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:27.408264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:27.453433 1612198 cri.go:89] found id: ""
	I0630 15:53:27.453471 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.453483 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:27.453491 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:27.453560 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:27.502170 1612198 cri.go:89] found id: ""
	I0630 15:53:27.502209 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.502222 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:27.502230 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:27.502304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:27.539066 1612198 cri.go:89] found id: ""
	I0630 15:53:27.539104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.539113 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:27.539124 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:27.539157 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:27.557767 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:27.557807 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:27.661895 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:27.661924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:27.661943 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:23.701364 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:23.704233 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:23.704638 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:23.704669 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:23.704895 1620744 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0630 15:53:23.709158 1620744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
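
This is the second /etc/hosts idiom in the log: strip any stale host.minikube.internal line with `grep -v`, append the fresh gateway mapping, and copy the temp file back in a single step so the edit lands all at once. Isolated:

    # Refresh the host.minikube.internal record (gateway IP from this profile).
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.72.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
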
	I0630 15:53:23.723315 1620744 kubeadm.go:875] updating cluster {Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:53:23.723444 1620744 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:53:23.723509 1620744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:23.763562 1620744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:53:23.763659 1620744 ssh_runner.go:195] Run: which lz4
	I0630 15:53:23.769114 1620744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:53:23.774965 1620744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:53:23.775007 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:53:25.374857 1620744 crio.go:462] duration metric: took 1.60580082s to copy over tarball
	I0630 15:53:25.374981 1620744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:53:27.865991 1620744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.490972706s)
	I0630 15:53:27.866033 1620744 crio.go:469] duration metric: took 2.491137727s to extract the tarball
	I0630 15:53:27.866044 1620744 ssh_runner.go:146] rm: /preloaded.tar.lz4
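
The preload path above avoids pulling every image over the network: a ~420MB lz4 tarball of the CRI-O image store is copied in, unpacked under /var (where containers/storage lives), and deleted. A hand-rolled approximation over plain scp/ssh (minikube streams the file through its own SSH runner; the /tmp staging here is an assumption for illustration):

    # Ship and unpack the image preload, then clean up the archive.
    scp preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 \
      docker@192.168.72.11:/tmp/preloaded.tar.lz4
    ssh docker@192.168.72.11 'sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'
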
	I0630 15:53:27.908959 1620744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:27.960351 1620744 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:53:27.960383 1620744 cache_images.go:84] Images are preloaded, skipping loading
	I0630 15:53:27.960392 1620744 kubeadm.go:926] updating node { 192.168.72.11 8443 v1.33.2 crio true true} ...
	I0630 15:53:27.960497 1620744 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
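
The unit fragment above becomes a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). Written by hand, with the ExecStart taken verbatim from the log:

    # kubelet drop-in: clear the stock ExecStart, then pin flags and paths.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Unit]' 'Wants=crio.service' '' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload
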
	I0630 15:53:27.960566 1620744 ssh_runner.go:195] Run: crio config
	I0630 15:53:28.007607 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:53:28.007639 1620744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:53:28.007668 1620744 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-668101 NodeName:bridge-668101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:53:28.007874 1620744 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-668101"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
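	The kubeadm config printed above is an ordinary multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by ---). A minimal Go sketch that decodes such a stream and pulls the kubelet eviction settings back out for a sanity check (assuming gopkg.in/yaml.v3 and a local copy of the file named kubeadm.yaml; minikube itself templates and ships this file differently):

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		Kind         string            `yaml:"kind"`
		CgroupDriver string            `yaml:"cgroupDriver"`
		EvictionHard map[string]string `yaml:"evictionHard"`
	}

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the file above
		if err != nil {
			log.Fatal(err)
		}
		// Walk every YAML document in the stream and pick out the
		// KubeletConfiguration; unknown fields are simply ignored.
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var c kubeletConfig
			if err := dec.Decode(&c); err != nil {
				break // io.EOF once every document is consumed
			}
			if c.Kind == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", c.CgroupDriver)
				fmt.Println("evictionHard:", c.EvictionHard)
			}
		}
	}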
	
	I0630 15:53:28.007956 1620744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:53:28.019439 1620744 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:53:28.019533 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:53:28.030681 1620744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0630 15:53:28.054217 1620744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:53:28.078657 1620744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0630 15:53:28.103175 1620744 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0630 15:53:28.107637 1620744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
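	The one-liner above makes the /etc/hosts update idempotent: filter out any existing control-plane.minikube.internal line, append the fresh mapping, and copy the temp file over the original. A rough Go equivalent of that rule (a hypothetical helper, not minikube's code; it writes the temp file next to the target so the final rename stays atomic):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.168.72.11\t" + host

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of `grep -v $'\tcontrol-plane.minikube.internal$'`:
		// drop any line that already maps the hostname.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)

		// Stage a sibling temp file and rename it into place, so readers
		// never observe a half-written /etc/hosts (needs root, like the
		// sudo cp in the log).
		tmp := "/etc/hosts.new"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
		if err := os.Rename(tmp, "/etc/hosts"); err != nil {
			log.Fatal(err)
		}
	}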
	I0630 15:53:28.121750 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:28.271570 1620744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:28.301805 1620744 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101 for IP: 192.168.72.11
	I0630 15:53:28.301846 1620744 certs.go:194] generating shared ca certs ...
	I0630 15:53:28.301873 1620744 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.302109 1620744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:53:28.302183 1620744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:53:28.302206 1620744 certs.go:256] generating profile certs ...
	I0630 15:53:28.302293 1620744 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key
	I0630 15:53:28.302316 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt with IP's: []
	I0630 15:53:28.454855 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt ...
	I0630 15:53:28.454891 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: {Name:mk937708224110c3dd03876ac97fd50296fa97e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.455077 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key ...
	I0630 15:53:28.455095 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key: {Name:mkabac9afc77f4fa227e818a7db37dc6cde93101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.455181 1620744 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f
	I0630 15:53:28.455199 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.11]
	I0630 15:53:28.535439 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f ...
	I0630 15:53:28.535477 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f: {Name:mkb3d4c341f11f3a902e7d6409776e997bb9f0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.535666 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f ...
	I0630 15:53:28.535680 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f: {Name:mkb836a4b78458ae1ce3c620e0b6b74aca7afa96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.535756 1620744 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt
	I0630 15:53:28.535850 1620744 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key
	I0630 15:53:28.535911 1620744 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key
	I0630 15:53:28.535927 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt with IP's: []
	I0630 15:53:28.888408 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt ...
	I0630 15:53:28.888451 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt: {Name:mkf4d3b4ec0f8a5e1d05a277edfc5ceb8007805d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.888663 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key ...
	I0630 15:53:28.888680 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key: {Name:mk8ac529262f2861b6afd57f5e5bb4e1423ec462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.888902 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:53:28.888952 1620744 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:53:28.888967 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:53:28.889001 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:53:28.889037 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:53:28.889066 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:53:28.889125 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:53:28.889775 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:53:28.927242 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:53:28.967550 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:53:29.017537 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:53:29.055944 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 15:53:29.085822 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0630 15:53:29.183293 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:53:29.217912 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:53:29.249508 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:53:29.281853 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:53:29.312083 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:53:29.346274 1620744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:53:29.368862 1620744 ssh_runner.go:195] Run: openssl version
	I0630 15:53:29.376652 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:53:29.391675 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.396844 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.396917 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.404281 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:53:29.417581 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:53:29.430622 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.436093 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.436174 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.443611 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:53:29.457568 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:53:29.471747 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.477296 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.477380 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.485268 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
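	Each of the three certificates is staged under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as <subject-hash>.0, the c_rehash-style layout OpenSSL uses to look up trust anchors. A small Go sketch of one such step, shelling out to openssl for the hash exactly as the log does (illustrative only, and the link creation needs root):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// hashLink mirrors the logged steps: ask openssl for the subject hash
	// of the PEM, then symlink /etc/ssl/certs/<hash>.0 back to the file.
	func hashLink(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs semantics: drop any stale link, then create the new one.
		_ = os.Remove(link)
		return os.Symlink(pem, link)
	}

	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked")
	}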
	I0630 15:53:29.498865 1620744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:53:29.504743 1620744 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:53:29.504819 1620744 kubeadm.go:392] StartCluster: {Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:53:29.504990 1620744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:53:29.505114 1620744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:53:29.554378 1620744 cri.go:89] found id: ""
	I0630 15:53:29.554448 1620744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:53:29.566684 1620744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:53:29.580816 1620744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:53:29.594087 1620744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:53:29.594122 1620744 kubeadm.go:157] found existing configuration files:
	
	I0630 15:53:29.594198 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:53:29.606128 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:53:29.606208 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:53:29.617824 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:53:29.628760 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:53:29.628849 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:53:29.643046 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:53:29.654618 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:53:29.654744 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:53:29.670789 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:53:29.686439 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:53:29.686511 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
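	The sweep above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; anything else, including a missing file as on this first start, is removed so kubeadm regenerates it. A compact Go rendering of that rule (paths from the log; the helper itself is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, c := range confs {
			data, err := os.ReadFile(c)
			// Missing file, or a file that points elsewhere: remove it
			// so `kubeadm init` writes a fresh one.
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(c)
				fmt.Println("removed (stale or absent):", c)
				continue
			}
			fmt.Println("kept:", c)
		}
	}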
	I0630 15:53:29.701021 1620744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:53:29.759278 1620744 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 15:53:29.759355 1620744 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:53:29.854960 1620744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:53:29.855106 1620744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:53:29.855286 1620744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 15:53:29.866548 1620744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0630 15:53:27.181869 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	W0630 15:53:29.675930 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:27.767088 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:27.767156 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:27.814647 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:27.814683 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.372878 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:30.392885 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:30.392993 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:30.450197 1612198 cri.go:89] found id: ""
	I0630 15:53:30.450235 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.450248 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:30.450258 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:30.450342 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:30.507009 1612198 cri.go:89] found id: ""
	I0630 15:53:30.507041 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.507051 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:30.507060 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:30.507147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:30.554455 1612198 cri.go:89] found id: ""
	I0630 15:53:30.554485 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.554496 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:30.554505 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:30.554572 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:30.598785 1612198 cri.go:89] found id: ""
	I0630 15:53:30.598821 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.598833 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:30.598841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:30.598911 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:30.634661 1612198 cri.go:89] found id: ""
	I0630 15:53:30.634701 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.634713 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:30.634722 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:30.634794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:30.674870 1612198 cri.go:89] found id: ""
	I0630 15:53:30.674903 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.674913 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:30.674922 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:30.674984 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:30.715843 1612198 cri.go:89] found id: ""
	I0630 15:53:30.715873 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.715882 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:30.715889 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:30.715947 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:30.752318 1612198 cri.go:89] found id: ""
	I0630 15:53:30.752356 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.752375 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:30.752390 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:30.752406 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.824741 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:30.824784 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:30.838605 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:30.838640 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:30.915839 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:30.915924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:30.915959 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:30.999770 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:30.999820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:29.943503 1620744 out.go:235]   - Generating certificates and keys ...
	I0630 15:53:29.943673 1620744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:53:29.943767 1620744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:53:30.013369 1620744 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:53:30.204256 1620744 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:53:30.247370 1620744 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:53:30.347086 1620744 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:53:30.905210 1620744 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:53:30.905417 1620744 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-668101 localhost] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0630 15:53:30.977829 1620744 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:53:30.978113 1620744 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-668101 localhost] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0630 15:53:31.175683 1620744 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:53:31.342818 1620744 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:53:32.050944 1620744 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:53:32.051027 1620744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:53:32.176724 1620744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:53:32.249204 1620744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 15:53:32.600906 1620744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:53:33.139702 1620744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:53:33.541220 1620744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:53:33.541742 1620744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:53:33.544105 1620744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0630 15:53:31.676642 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:32.675850 1619158 node_ready.go:49] node "flannel-668101" is "Ready"
	I0630 15:53:32.675909 1619158 node_ready.go:38] duration metric: took 10.003542336s for node "flannel-668101" to be "Ready" ...
	I0630 15:53:32.675929 1619158 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:53:32.676002 1619158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:32.701943 1619158 api_server.go:72] duration metric: took 10.844066824s to wait for apiserver process to appear ...
	I0630 15:53:32.701974 1619158 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:53:32.701996 1619158 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I0630 15:53:32.706791 1619158 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I0630 15:53:32.708016 1619158 api_server.go:141] control plane version: v1.33.2
	I0630 15:53:32.708046 1619158 api_server.go:131] duration metric: took 6.062225ms to wait for apiserver health ...
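	The healthz probe above is a plain HTTPS GET against the apiserver that treats a 200 response with body `ok` as healthy. A throwaway Go sketch of such a probe (it skips TLS verification because the serving cert is signed by the cluster-local minikubeCA, so this is for diagnostics only; a real probe would load that CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Diagnostic shortcut: verify nothing. A production
				// probe would add the cluster CA to a cert pool here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.50.164:8443/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}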
	I0630 15:53:32.708058 1619158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:53:32.716053 1619158 system_pods.go:59] 7 kube-system pods found
	I0630 15:53:32.716114 1619158 system_pods.go:61] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:32.716121 1619158 system_pods.go:61] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:32.716130 1619158 system_pods.go:61] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:32.716136 1619158 system_pods.go:61] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:32.716146 1619158 system_pods.go:61] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:32.716151 1619158 system_pods.go:61] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:32.716159 1619158 system_pods.go:61] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:32.716169 1619158 system_pods.go:74] duration metric: took 8.103111ms to wait for pod list to return data ...
	I0630 15:53:32.716184 1619158 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:53:32.721014 1619158 default_sa.go:45] found service account: "default"
	I0630 15:53:32.721045 1619158 default_sa.go:55] duration metric: took 4.852192ms for default service account to be created ...
	I0630 15:53:32.721059 1619158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:53:32.729131 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:32.729169 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:32.729178 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:32.729186 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:32.729192 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:32.729197 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:32.729208 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:32.729215 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:32.729252 1619158 retry.go:31] will retry after 311.225306ms: missing components: kube-dns
	I0630 15:53:33.046517 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.046552 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.046558 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.046563 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.046567 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.046571 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.046574 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.046578 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:33.046594 1619158 retry.go:31] will retry after 361.483143ms: missing components: kube-dns
	I0630 15:53:33.413105 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.413142 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.413148 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.413154 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.413159 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.413163 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.413171 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.413175 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:33.413191 1619158 retry.go:31] will retry after 423.305566ms: missing components: kube-dns
	I0630 15:53:33.853206 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.853242 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.853259 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.853267 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.853272 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.853277 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.853282 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.853287 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:33.853305 1619158 retry.go:31] will retry after 554.816826ms: missing components: kube-dns
	I0630 15:53:34.414917 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:34.414989 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:34.415017 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:34.415029 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:34.415036 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:34.415042 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:34.415047 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:34.415057 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:34.415250 1619158 retry.go:31] will retry after 473.364986ms: missing components: kube-dns
	I0630 15:53:34.892811 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:34.892851 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:34.892857 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:34.892863 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:34.892866 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:34.892870 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:34.892873 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:34.892877 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:34.892893 1619158 retry.go:31] will retry after 582.108906ms: missing components: kube-dns
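	The repeated `retry.go:31] will retry after …` lines are a jittered poll loop: list the kube-system pods, and if kube-dns is still missing, sleep a randomized, growing interval and check again. A generic sketch of that shape (the predicate below is a stand-in counter, not minikube's real pod check):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry polls fn until it succeeds or attempts run out, sleeping a
	// jittered, growing delay between tries, like the log's retry lines.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		ready := 0
		err := retry(5, 300*time.Millisecond, func() error {
			ready++ // stand-in for "is coredns Running yet?"
			if ready < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("done:", err)
	}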
	I0630 15:53:33.553483 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:33.570047 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:33.570150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:33.616739 1612198 cri.go:89] found id: ""
	I0630 15:53:33.616775 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.616788 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:33.616798 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:33.616865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:33.659234 1612198 cri.go:89] found id: ""
	I0630 15:53:33.659265 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.659277 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:33.659285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:33.659353 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:33.697938 1612198 cri.go:89] found id: ""
	I0630 15:53:33.697977 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.697989 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:33.697997 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:33.698115 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:33.739043 1612198 cri.go:89] found id: ""
	I0630 15:53:33.739104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.739118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:33.739127 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:33.739200 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:33.781947 1612198 cri.go:89] found id: ""
	I0630 15:53:33.781983 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.781994 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:33.782006 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:33.782078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:33.818201 1612198 cri.go:89] found id: ""
	I0630 15:53:33.818241 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.818254 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:33.818264 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:33.818336 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:33.865630 1612198 cri.go:89] found id: ""
	I0630 15:53:33.865767 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.865806 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:33.865851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:33.865966 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:33.905740 1612198 cri.go:89] found id: ""
	I0630 15:53:33.905807 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.905821 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:33.905834 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:33.905852 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:33.978403 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:33.978451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:34.000180 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:34.000225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:34.077381 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:34.077433 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:34.077451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:34.158516 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:34.158571 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:36.703046 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:36.725942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:36.726033 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:36.769910 1612198 cri.go:89] found id: ""
	I0630 15:53:36.770040 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.770066 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:36.770075 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:36.770150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:36.817303 1612198 cri.go:89] found id: ""
	I0630 15:53:36.817339 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.817350 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:36.817358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:36.817442 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:36.852676 1612198 cri.go:89] found id: ""
	I0630 15:53:36.852721 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.852734 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:36.852743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:36.852811 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:36.896796 1612198 cri.go:89] found id: ""
	I0630 15:53:36.896829 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.896840 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:36.896848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:36.896929 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:36.932669 1612198 cri.go:89] found id: ""
	I0630 15:53:36.932708 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.932720 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:36.932729 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:36.932810 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:36.972728 1612198 cri.go:89] found id: ""
	I0630 15:53:36.972762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.972773 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:36.972781 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:36.972855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:37.009554 1612198 cri.go:89] found id: ""
	I0630 15:53:37.009594 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.009605 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:37.009614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:37.009688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:37.047124 1612198 cri.go:89] found id: ""
	I0630 15:53:37.047163 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.047175 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:37.047188 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:37.047204 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:37.110372 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:37.110427 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:37.127309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:37.127352 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:37.196740 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:37.196770 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:37.196793 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:37.284276 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:37.284322 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:33.546215 1620744 out.go:235]   - Booting up control plane ...
	I0630 15:53:33.546374 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:53:33.546471 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:53:33.546551 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:53:33.567048 1620744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:53:33.573691 1620744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:53:33.573744 1620744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:53:33.768543 1620744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 15:53:33.768723 1620744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 15:53:34.769251 1620744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001331666s
	I0630 15:53:34.771797 1620744 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 15:53:34.771934 1620744 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.72.11:8443/livez
	I0630 15:53:34.772075 1620744 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 15:53:34.772163 1620744 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 15:53:37.720863 1620744 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.949703734s
	I0630 15:53:38.248441 1620744 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.477609557s
	I0630 15:53:40.275015 1620744 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.504420612s
	I0630 15:53:40.295071 1620744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 15:53:40.318773 1620744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 15:53:40.357954 1620744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 15:53:40.358269 1620744 kubeadm.go:310] [mark-control-plane] Marking the node bridge-668101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 15:53:40.377248 1620744 kubeadm.go:310] [bootstrap-token] Using token: ay7ggg.v4lz4n8lgdcwzb1z
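	Bootstrap tokens like the one above follow kubeadm's documented `<id>.<secret>` format: a 6-character public token ID and a 16-character secret, both over [a-z0-9]. A quick format check in Go (the regexp mirrors that documented pattern; the sample strings are the token from the log plus one invented negative case):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Documented kubeadm bootstrap-token shape: 6-char ID, dot, 16-char secret.
	var tokenRE = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

	func main() {
		for _, t := range []string{"ay7ggg.v4lz4n8lgdcwzb1z", "not-a-token"} {
			fmt.Println(t, tokenRE.MatchString(t))
		}
	}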
	I0630 15:53:35.480398 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:35.480445 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:35.480453 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:35.480460 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:35.480466 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:35.480472 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:35.480477 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:35.480481 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:35.480501 1619158 retry.go:31] will retry after 722.350023ms: missing components: kube-dns
	I0630 15:53:36.207319 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:36.207354 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:36.207360 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:36.207367 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:36.207372 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:36.207376 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:36.207379 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:36.207384 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:36.207401 1619158 retry.go:31] will retry after 1.469551324s: missing components: kube-dns
	I0630 15:53:37.682415 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:37.682461 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:37.682470 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:37.682479 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:37.682484 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:37.682491 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:37.682496 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:37.682501 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:37.682522 1619158 retry.go:31] will retry after 1.601843725s: missing components: kube-dns
	I0630 15:53:39.289676 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:39.289721 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:39.289731 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:39.289741 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:39.289748 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:39.289753 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:39.289759 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:39.289763 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:39.289786 1619158 retry.go:31] will retry after 1.660514017s: missing components: kube-dns
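Each retry.go:31 line above is one pass of a poll-until-ready loop over the kube-system pod list, sleeping a growing, jittered interval between passes (722ms, 1.47s, 1.60s, 1.66s here). A self-contained sketch of the pattern, with a caller-supplied check function standing in for the real pod listing — not minikube's actual retry package:

    // retryUntil re-runs check with jittered, growing sleeps until it
    // succeeds or the overall deadline passes.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        wait := 500 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().Add(wait).After(deadline) {
                return fmt.Errorf("timed out, last error: %v", err)
            }
            // Grow the interval and add jitter, mirroring the uneven
            // 722ms / 1.47s / 1.60s / 2.64s gaps in the log.
            sleep := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %s\n", sleep)
            time.Sleep(sleep)
            wait = wait * 3 / 2
        }
    }

    func main() {
        attempts := 0
        _ = retryUntil(6*time.Minute, func() error {
            attempts++
            if attempts < 4 {
                return fmt.Errorf("missing components: kube-dns")
            }
            return nil
        })
    }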
	I0630 15:53:40.379081 1620744 out.go:235]   - Configuring RBAC rules ...
	I0630 15:53:40.379262 1620744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 15:53:40.390839 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 15:53:40.406448 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 15:53:40.414176 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 15:53:40.420005 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 15:53:40.424273 1620744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 15:53:40.682394 1620744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 15:53:41.124826 1620744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 15:53:41.682390 1620744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 15:53:41.683365 1620744 kubeadm.go:310] 
	I0630 15:53:41.683473 1620744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 15:53:41.683509 1620744 kubeadm.go:310] 
	I0630 15:53:41.683630 1620744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 15:53:41.683647 1620744 kubeadm.go:310] 
	I0630 15:53:41.683685 1620744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 15:53:41.683760 1620744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 15:53:41.683843 1620744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 15:53:41.683852 1620744 kubeadm.go:310] 
	I0630 15:53:41.683934 1620744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 15:53:41.683943 1620744 kubeadm.go:310] 
	I0630 15:53:41.684007 1620744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 15:53:41.684021 1620744 kubeadm.go:310] 
	I0630 15:53:41.684099 1620744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 15:53:41.684203 1620744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 15:53:41.684332 1620744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 15:53:41.684349 1620744 kubeadm.go:310] 
	I0630 15:53:41.684477 1620744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 15:53:41.684586 1620744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 15:53:41.684594 1620744 kubeadm.go:310] 
	I0630 15:53:41.684715 1620744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ay7ggg.v4lz4n8lgdcwzb1z \
	I0630 15:53:41.684897 1620744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 15:53:41.684947 1620744 kubeadm.go:310] 	--control-plane 
	I0630 15:53:41.684960 1620744 kubeadm.go:310] 
	I0630 15:53:41.685080 1620744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 15:53:41.685101 1620744 kubeadm.go:310] 
	I0630 15:53:41.685204 1620744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ay7ggg.v4lz4n8lgdcwzb1z \
	I0630 15:53:41.685345 1620744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
	I0630 15:53:41.686851 1620744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:53:41.686884 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:53:41.688726 1620744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
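The bridge CNI step announced above installs a 496-byte conflist under /etc/cni/net.d (the mkdir and scp appear a few lines below). The log does not include the file's contents; as an illustration only, a typical bridge+portmap conflist of roughly that shape could be produced like this — the JSON body is an assumption, not the exact bytes minikube ships:

    // Writes an illustrative bridge CNI conflist of the kind the
    // "Configuring bridge CNI" step installs. Needs root for /etc paths.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }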
	I0630 15:53:39.832609 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:39.849706 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:39.849794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:39.893352 1612198 cri.go:89] found id: ""
	I0630 15:53:39.893391 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.893433 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:39.893442 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:39.893515 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:39.932840 1612198 cri.go:89] found id: ""
	I0630 15:53:39.932868 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.932876 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:39.932890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:39.932955 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:39.981060 1612198 cri.go:89] found id: ""
	I0630 15:53:39.981097 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.981109 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:39.981117 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:39.981203 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:40.018727 1612198 cri.go:89] found id: ""
	I0630 15:53:40.018768 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.018781 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:40.018790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:40.018863 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:40.061585 1612198 cri.go:89] found id: ""
	I0630 15:53:40.061627 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.061640 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:40.061649 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:40.061743 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:40.105417 1612198 cri.go:89] found id: ""
	I0630 15:53:40.105448 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.105456 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:40.105464 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:40.105527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:40.141656 1612198 cri.go:89] found id: ""
	I0630 15:53:40.141686 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.141697 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:40.141705 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:40.141775 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:40.179978 1612198 cri.go:89] found id: ""
	I0630 15:53:40.180011 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.180020 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:40.180029 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:40.180042 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:40.197879 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:40.197924 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:40.271201 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:40.271257 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:40.271277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:40.355166 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:40.355211 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:40.408985 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:40.409023 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
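The cri.go/logs.go block above probes for each control-plane component by shelling out to `crictl ps -a --quiet --name=<component>` over SSH and treating empty output as "no container found". A local (non-SSH) sketch of the same probe:

    // listContainers runs crictl locally and returns the container IDs
    // whose name matches the filter; empty output means none found.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }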
	I0630 15:53:41.690209 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:53:41.702679 1620744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 15:53:41.734200 1620744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:53:41.734327 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:41.734404 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-668101 minikube.k8s.io/updated_at=2025_06_30T15_53_41_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=bridge-668101 minikube.k8s.io/primary=true
	I0630 15:53:41.895628 1620744 ops.go:34] apiserver oom_adj: -16
	I0630 15:53:41.895917 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:42.396198 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:42.896761 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:40.954924 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:40.954967 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:40.954975 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:40.954985 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:40.954990 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:40.954996 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:40.955000 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:40.955005 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:40.955026 1619158 retry.go:31] will retry after 2.638740648s: missing components: kube-dns
	I0630 15:53:43.598079 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:43.598113 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:43.598119 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:43.598126 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:43.598130 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:43.598134 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:43.598137 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:43.598140 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:43.598162 1619158 retry.go:31] will retry after 3.489845888s: missing components: kube-dns
	I0630 15:53:43.396863 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:43.896228 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:44.396818 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:44.896130 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:45.396432 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:45.896985 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:46.000729 1620744 kubeadm.go:1105] duration metric: took 4.266473213s to wait for elevateKubeSystemPrivileges
	I0630 15:53:46.000792 1620744 kubeadm.go:394] duration metric: took 16.495976664s to StartCluster
	I0630 15:53:46.000825 1620744 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:46.000948 1620744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:53:46.002167 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:46.002462 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 15:53:46.002466 1620744 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:53:46.002560 1620744 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:53:46.002667 1620744 addons.go:69] Setting storage-provisioner=true in profile "bridge-668101"
	I0630 15:53:46.002692 1620744 addons.go:238] Setting addon storage-provisioner=true in "bridge-668101"
	I0630 15:53:46.002713 1620744 addons.go:69] Setting default-storageclass=true in profile "bridge-668101"
	I0630 15:53:46.002742 1620744 host.go:66] Checking if "bridge-668101" exists ...
	I0630 15:53:46.002766 1620744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-668101"
	I0630 15:53:46.002725 1620744 config.go:182] Loaded profile config "bridge-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:46.003139 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.003182 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.003225 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.003269 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.004052 1620744 out.go:177] * Verifying Kubernetes components...
	I0630 15:53:46.005665 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:46.020307 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0630 15:53:46.021011 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.021601 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.021625 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.021987 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.022574 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.022627 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.026416 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0630 15:53:46.027718 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.028783 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.028829 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.029604 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.029867 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.035884 1620744 addons.go:238] Setting addon default-storageclass=true in "bridge-668101"
	I0630 15:53:46.035944 1620744 host.go:66] Checking if "bridge-668101" exists ...
	I0630 15:53:46.036350 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.036409 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.039472 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0630 15:53:46.040012 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.040664 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.040690 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.041066 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.041289 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.043282 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:46.045535 1620744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
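The main.go/libmachine lines above show the driver-plugin handshake: each docker-machine-driver-kvm2 process serves RPC on a random loopback port, and minikube dials it and invokes methods such as GetVersion and GetMachineName by name. A schematic of that mechanism using the standard library's net/rpc — an illustration of the pattern, not libmachine's actual wire protocol:

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    type Driver struct{}

    // GetVersion mirrors the "() Calling .GetVersion" handshake in the log.
    func (d *Driver) GetVersion(_ int, reply *int) error {
        *reply = 1 // "Using API Version 1"
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            panic(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, as in the log
        if err != nil {
            panic(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        go srv.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        var version int
        if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
            panic(err)
        }
        fmt.Println("Using API Version", version)
    }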
	I0630 15:53:42.967786 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:42.987531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:42.987625 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:43.023328 1612198 cri.go:89] found id: ""
	I0630 15:53:43.023360 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.023370 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:43.023377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:43.023449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:43.059730 1612198 cri.go:89] found id: ""
	I0630 15:53:43.059774 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.059785 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:43.059793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:43.059875 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:43.100987 1612198 cri.go:89] found id: ""
	I0630 15:53:43.101024 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.101036 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:43.101045 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:43.101118 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:43.139556 1612198 cri.go:89] found id: ""
	I0630 15:53:43.139591 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.139603 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:43.139611 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:43.139669 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:43.177647 1612198 cri.go:89] found id: ""
	I0630 15:53:43.177677 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.177686 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:43.177692 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:43.177749 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:43.214354 1612198 cri.go:89] found id: ""
	I0630 15:53:43.214388 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.214400 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:43.214407 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:43.214475 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:43.254332 1612198 cri.go:89] found id: ""
	I0630 15:53:43.254364 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.254376 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:43.254393 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:43.254459 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:43.292194 1612198 cri.go:89] found id: ""
	I0630 15:53:43.292224 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.292232 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:43.292243 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:43.292255 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:43.345690 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:43.345732 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:43.360155 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:43.360191 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:43.441505 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:43.441537 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:43.441554 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:43.527009 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:43.527063 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:46.069596 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:46.092563 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:46.092646 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:46.132093 1612198 cri.go:89] found id: ""
	I0630 15:53:46.132131 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.132144 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:46.132153 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:46.132225 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:46.175509 1612198 cri.go:89] found id: ""
	I0630 15:53:46.175544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.175556 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:46.175565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:46.175647 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:46.225442 1612198 cri.go:89] found id: ""
	I0630 15:53:46.225478 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.225490 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:46.225502 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:46.225573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:46.275070 1612198 cri.go:89] found id: ""
	I0630 15:53:46.275109 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.275122 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:46.275131 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:46.275206 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:46.320084 1612198 cri.go:89] found id: ""
	I0630 15:53:46.320116 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.320126 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:46.320133 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:46.320198 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:46.360602 1612198 cri.go:89] found id: ""
	I0630 15:53:46.360682 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.360699 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:46.360711 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:46.360818 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:46.404187 1612198 cri.go:89] found id: ""
	I0630 15:53:46.404222 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.404231 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:46.404238 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:46.404304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:46.457761 1612198 cri.go:89] found id: ""
	I0630 15:53:46.457803 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.457820 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:46.457835 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:46.457855 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:46.524526 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:46.524574 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:46.542938 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:46.542974 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:46.620336 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:46.620372 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:46.620386 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:46.706447 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:46.706496 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:46.047099 1620744 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:46.047127 1620744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:53:46.047171 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:46.051881 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.052589 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:46.052618 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.052990 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:46.053240 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:46.053473 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:46.053666 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:46.055796 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0630 15:53:46.056603 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.057196 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.057218 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.057663 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.058201 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.058252 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.078886 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0630 15:53:46.079821 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.080456 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.080484 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.080941 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.081233 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.083743 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:46.084008 1620744 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:46.084024 1620744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:53:46.084042 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:46.088653 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.089277 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:46.089310 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.089516 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:46.089752 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:46.090006 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:46.090184 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:46.376641 1620744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:46.376679 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 15:53:46.468914 1620744 node_ready.go:35] waiting up to 15m0s for node "bridge-668101" to be "Ready" ...
	I0630 15:53:46.483783 1620744 node_ready.go:49] node "bridge-668101" is "Ready"
	I0630 15:53:46.483830 1620744 node_ready.go:38] duration metric: took 14.870889ms for node "bridge-668101" to be "Ready" ...
	I0630 15:53:46.483849 1620744 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:53:46.483904 1620744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:46.639045 1620744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:46.707352 1620744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:47.223014 1620744 start.go:972] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0630 15:53:47.223081 1620744 api_server.go:72] duration metric: took 1.2205745s to wait for apiserver process to appear ...
	I0630 15:53:47.223099 1620744 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:53:47.223143 1620744 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I0630 15:53:47.223206 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.223233 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.223657 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.223694 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.223705 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.223713 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.223714 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.223963 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.224017 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.223999 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.242476 1620744 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I0630 15:53:47.244520 1620744 api_server.go:141] control plane version: v1.33.2
	I0630 15:53:47.244556 1620744 api_server.go:131] duration metric: took 21.449815ms to wait for apiserver health ...
	I0630 15:53:47.244567 1620744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:53:47.260743 1620744 system_pods.go:59] 7 kube-system pods found
	I0630 15:53:47.260790 1620744 system_pods.go:61] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.260803 1620744 system_pods.go:61] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.260813 1620744 system_pods.go:61] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.260822 1620744 system_pods.go:61] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.260833 1620744 system_pods.go:61] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.260847 1620744 system_pods.go:61] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.260855 1620744 system_pods.go:61] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.260862 1620744 system_pods.go:74] duration metric: took 16.289084ms to wait for pod list to return data ...
	I0630 15:53:47.260873 1620744 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:53:47.265456 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.265485 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.265804 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.265825 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.265828 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.273837 1620744 default_sa.go:45] found service account: "default"
	I0630 15:53:47.273880 1620744 default_sa.go:55] duration metric: took 12.997202ms for default service account to be created ...
	I0630 15:53:47.273895 1620744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:53:47.345061 1620744 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.345113 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.345126 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.345134 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.345144 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.345154 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.345162 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.345175 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.345223 1620744 retry.go:31] will retry after 281.101886ms: missing components: kube-dns
	I0630 15:53:47.638563 1620744 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.638608 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.638620 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.638628 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.638637 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.638647 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.638656 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.638663 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.638680 1620744 retry.go:31] will retry after 257.359626ms: missing components: kube-dns
	I0630 15:53:47.705752 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.705779 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.706118 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.706145 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.706176 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.706184 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.706445 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.706459 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.709137 1620744 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0630 15:53:47.710580 1620744 addons.go:514] duration metric: took 1.708021313s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0630 15:53:47.727425 1620744 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-668101" context rescaled to 1 replicas
	I0630 15:53:47.901617 1620744 system_pods.go:86] 8 kube-system pods found
	I0630 15:53:47.901662 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.901673 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.901680 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.901689 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.901699 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.901705 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.901716 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.901729 1620744 system_pods.go:89] "storage-provisioner" [d39eade7-d69c-4ba1-871c-9d22e90f3162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:47.901756 1620744 retry.go:31] will retry after 361.046684ms: missing components: kube-dns
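In the bridge-668101 run above, node_ready.go decides the node is "Ready" in about 15ms by reading the node's status conditions, and api_server.go then confirms /healthz returns 200. With client-go the readiness check is a single lookup; a minimal sketch assuming a kubeconfig at the default path:

    // nodeIsReady reports whether a node's NodeReady condition is True,
    // the same signal node_ready.go waits on.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeIsReady(cs, "bridge-668101")
        fmt.Println(ready, err)
    }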
	I0630 15:53:47.092203 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.092247 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Running
	I0630 15:53:47.092256 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:47.092261 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:47.092266 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:47.092272 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:47.092279 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:47.092285 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:47.092297 1619158 system_pods.go:126] duration metric: took 14.371230346s to wait for k8s-apps to be running ...
	I0630 15:53:47.092315 1619158 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 15:53:47.092395 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:53:47.107330 1619158 system_svc.go:56] duration metric: took 14.999723ms WaitForService to wait for kubelet
	I0630 15:53:47.107386 1619158 kubeadm.go:578] duration metric: took 25.24951704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:53:47.107425 1619158 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:53:47.111477 1619158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:53:47.111513 1619158 node_conditions.go:123] node cpu capacity is 2
	I0630 15:53:47.111531 1619158 node_conditions.go:105] duration metric: took 4.099412ms to run NodePressure ...
	I0630 15:53:47.111548 1619158 start.go:241] waiting for startup goroutines ...
	I0630 15:53:47.111557 1619158 start.go:246] waiting for cluster config update ...
	I0630 15:53:47.111572 1619158 start.go:255] writing updated cluster config ...
	I0630 15:53:47.111942 1619158 ssh_runner.go:195] Run: rm -f paused
	I0630 15:53:47.118482 1619158 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:47.122226 1619158 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-zlnjm" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.126835 1619158 pod_ready.go:94] pod "coredns-674b8bbfcf-zlnjm" is "Ready"
	I0630 15:53:47.126873 1619158 pod_ready.go:86] duration metric: took 4.619265ms for pod "coredns-674b8bbfcf-zlnjm" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.129263 1619158 pod_ready.go:83] waiting for pod "etcd-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.133727 1619158 pod_ready.go:94] pod "etcd-flannel-668101" is "Ready"
	I0630 15:53:47.133762 1619158 pod_ready.go:86] duration metric: took 4.469718ms for pod "etcd-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.135699 1619158 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.140237 1619158 pod_ready.go:94] pod "kube-apiserver-flannel-668101" is "Ready"
	I0630 15:53:47.140273 1619158 pod_ready.go:86] duration metric: took 4.536145ms for pod "kube-apiserver-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.143805 1619158 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.524212 1619158 pod_ready.go:94] pod "kube-controller-manager-flannel-668101" is "Ready"
	I0630 15:53:47.524250 1619158 pod_ready.go:86] duration metric: took 380.412398ms for pod "kube-controller-manager-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.723808 1619158 pod_ready.go:83] waiting for pod "kube-proxy-fl9rb" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.122925 1619158 pod_ready.go:94] pod "kube-proxy-fl9rb" is "Ready"
	I0630 15:53:48.122960 1619158 pod_ready.go:86] duration metric: took 399.120603ms for pod "kube-proxy-fl9rb" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.323641 1619158 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.722788 1619158 pod_ready.go:94] pod "kube-scheduler-flannel-668101" is "Ready"
	I0630 15:53:48.722822 1619158 pod_ready.go:86] duration metric: took 399.155106ms for pod "kube-scheduler-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.722836 1619158 pod_ready.go:40] duration metric: took 1.604308968s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:48.771506 1619158 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:53:48.774098 1619158 out.go:177] * Done! kubectl is now configured to use "flannel-668101" cluster and "default" namespace by default
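The final line of the flannel run compares the local kubectl version to the cluster version and reports "(minor skew: 0)"; kubectl is only supported within one minor version of the apiserver, so minikube flags larger gaps. A toy version of that comparison, with deliberately simple parsing:

    // minorSkew returns the absolute difference between the minor
    // versions of two "major.minor.patch" strings, the figure behind
    // the "(minor skew: 0)" note.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0
        }
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func minorSkew(client, cluster string) int {
        d := minor(client) - minor(cluster)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Printf("minor skew: %d\n", minorSkew("1.33.2", "1.33.2")) // minor skew: 0
    }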
	I0630 15:53:49.256833 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:49.276256 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:49.276328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:49.326292 1612198 cri.go:89] found id: ""
	I0630 15:53:49.326327 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.326339 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:49.326356 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:49.326427 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:49.371428 1612198 cri.go:89] found id: ""
	I0630 15:53:49.371486 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.371496 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:49.371503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:49.371568 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:49.415763 1612198 cri.go:89] found id: ""
	I0630 15:53:49.415840 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.415855 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:49.415864 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:49.415927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:49.456276 1612198 cri.go:89] found id: ""
	I0630 15:53:49.456313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.456324 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:49.456332 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:49.456421 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:49.496696 1612198 cri.go:89] found id: ""
	I0630 15:53:49.496735 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.496753 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:49.496762 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:49.496819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:49.537728 1612198 cri.go:89] found id: ""
	I0630 15:53:49.537763 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.537771 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:49.537778 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:49.537837 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:49.575693 1612198 cri.go:89] found id: ""
	I0630 15:53:49.575725 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.575734 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:49.575740 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:49.575795 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:49.617896 1612198 cri.go:89] found id: ""
	I0630 15:53:49.617931 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.617941 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:49.617967 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:49.617986 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:49.668327 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:49.668372 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:49.721223 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:49.721270 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:49.737061 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:49.737094 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:49.814464 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:49.814490 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:49.814503 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
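
This gather cycle repeats below every few seconds: with nothing answering on localhost:8443, minikube lists CRI containers by name and falls back to kubelet, dmesg, and CRI-O logs for diagnostics. A stripped-down sketch of the container-listing step, assuming crictl and sudo are available on the host; it shells out to the exact command seen in the log rather than reproducing minikube's cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same command seen in the log,
//   sudo crictl ps -a --quiet --name=<name>
// and returns the container IDs it prints, one per line.
// Sketch only: minikube runs this over SSH via its ssh_runner.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, n := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(n)
		fmt.Printf("%s: %d containers (err=%v)\n", n, len(ids), err)
	}
}

An empty ID list is what produces the `found id: ""` / `0 containers` pairs above.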
	I0630 15:53:52.393329 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:52.409925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:52.410010 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:52.446622 1612198 cri.go:89] found id: ""
	I0630 15:53:52.446659 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.446673 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:52.446684 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:52.446769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:52.493894 1612198 cri.go:89] found id: ""
	I0630 15:53:52.493929 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.493940 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:52.493947 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:52.494012 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:52.530891 1612198 cri.go:89] found id: ""
	I0630 15:53:52.530943 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.530956 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:52.530965 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:52.531141 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:52.569016 1612198 cri.go:89] found id: ""
	I0630 15:53:52.569046 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.569054 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:52.569068 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:52.569144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:52.607137 1612198 cri.go:89] found id: ""
	I0630 15:53:52.607176 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.607186 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:52.607194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:52.607264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:52.655286 1612198 cri.go:89] found id: ""
	I0630 15:53:52.655334 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.655343 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:52.655350 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:52.655420 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:48.266876 1620744 system_pods.go:86] 8 kube-system pods found
	I0630 15:53:48.266910 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:48.266917 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:48.266923 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:48.266928 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:48.266936 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:48.266940 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:48.266944 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:48.266949 1620744 system_pods.go:89] "storage-provisioner" [d39eade7-d69c-4ba1-871c-9d22e90f3162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:48.266959 1620744 system_pods.go:126] duration metric: took 993.056385ms to wait for k8s-apps to be running ...
	I0630 15:53:48.266967 1620744 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 15:53:48.267016 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:53:48.282778 1620744 system_svc.go:56] duration metric: took 15.79609ms WaitForService to wait for kubelet
	I0630 15:53:48.282832 1620744 kubeadm.go:578] duration metric: took 2.28032496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:53:48.282860 1620744 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:53:48.286721 1620744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:53:48.286750 1620744 node_conditions.go:123] node cpu capacity is 2
	I0630 15:53:48.286764 1620744 node_conditions.go:105] duration metric: took 3.897099ms to run NodePressure ...
	I0630 15:53:48.286777 1620744 start.go:241] waiting for startup goroutines ...
	I0630 15:53:48.286784 1620744 start.go:246] waiting for cluster config update ...
	I0630 15:53:48.286794 1620744 start.go:255] writing updated cluster config ...
	I0630 15:53:48.287052 1620744 ssh_runner.go:195] Run: rm -f paused
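
The kubelet service check a few lines above (`systemctl is-active --quiet`) succeeds purely by exit code: zero means the unit is active. A hedged sketch of that probe; note it drops the extra `service` token that minikube's logged invocation carries:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive loosely mirrors the logged check: systemctl is-active
// exits 0 only when the unit is active, so Run() returning nil is enough.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}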
	I0630 15:53:48.292293 1620744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:48.297080 1620744 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-hggsr" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:53:50.309473 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	W0630 15:53:52.803327 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	I0630 15:53:52.693017 1612198 cri.go:89] found id: ""
	I0630 15:53:52.693053 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.693066 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:52.693093 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:52.693156 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:52.729639 1612198 cri.go:89] found id: ""
	I0630 15:53:52.729674 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.729685 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:52.729713 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:52.729731 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:52.744808 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:52.744846 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:52.818006 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:52.818076 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:52.818095 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:52.913720 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:52.913794 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:52.955851 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:52.955898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:55.506514 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:55.523943 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:55.524024 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:55.562846 1612198 cri.go:89] found id: ""
	I0630 15:53:55.562884 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.562893 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:55.562900 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:55.562960 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:55.601862 1612198 cri.go:89] found id: ""
	I0630 15:53:55.601895 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.601907 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:55.601915 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:55.601988 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:55.650904 1612198 cri.go:89] found id: ""
	I0630 15:53:55.650946 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.650958 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:55.650968 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:55.651051 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:55.695050 1612198 cri.go:89] found id: ""
	I0630 15:53:55.695081 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.695089 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:55.695096 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:55.695167 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:55.732863 1612198 cri.go:89] found id: ""
	I0630 15:53:55.732904 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.732917 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:55.732925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:55.732997 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:55.772221 1612198 cri.go:89] found id: ""
	I0630 15:53:55.772254 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.772265 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:55.772275 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:55.772349 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:55.811091 1612198 cri.go:89] found id: ""
	I0630 15:53:55.811134 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.811146 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:55.811154 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:55.811213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:55.846273 1612198 cri.go:89] found id: ""
	I0630 15:53:55.846313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.846338 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:55.846352 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:55.846370 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:55.921797 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:55.921845 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:55.963517 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:55.963553 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:56.023942 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:56.023988 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:56.038647 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:56.038687 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:56.119572 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0630 15:53:55.303307 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	I0630 15:53:55.805200 1620744 pod_ready.go:94] pod "coredns-674b8bbfcf-hggsr" is "Ready"
	I0630 15:53:55.805235 1620744 pod_ready.go:86] duration metric: took 7.508115108s for pod "coredns-674b8bbfcf-hggsr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:55.805249 1620744 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:53:57.811769 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-qt9bv" is not "Ready", error: <nil>
	I0630 15:53:58.309220 1620744 pod_ready.go:99] pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace is gone: getting pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace (will retry): pods "coredns-674b8bbfcf-qt9bv" not found
	I0630 15:53:58.309253 1620744 pod_ready.go:86] duration metric: took 2.5039962s for pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.311407 1620744 pod_ready.go:83] waiting for pod "etcd-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.315815 1620744 pod_ready.go:94] pod "etcd-bridge-668101" is "Ready"
	I0630 15:53:58.315845 1620744 pod_ready.go:86] duration metric: took 4.413088ms for pod "etcd-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.317890 1620744 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.321951 1620744 pod_ready.go:94] pod "kube-apiserver-bridge-668101" is "Ready"
	I0630 15:53:58.322004 1620744 pod_ready.go:86] duration metric: took 4.070763ms for pod "kube-apiserver-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.325941 1620744 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.330240 1620744 pod_ready.go:94] pod "kube-controller-manager-bridge-668101" is "Ready"
	I0630 15:53:58.330273 1620744 pod_ready.go:86] duration metric: took 4.307436ms for pod "kube-controller-manager-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.509388 1620744 pod_ready.go:83] waiting for pod "kube-proxy-q2tjj" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.911133 1620744 pod_ready.go:94] pod "kube-proxy-q2tjj" is "Ready"
	I0630 15:53:58.911181 1620744 pod_ready.go:86] duration metric: took 401.753348ms for pod "kube-proxy-q2tjj" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.110354 1620744 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.509728 1620744 pod_ready.go:94] pod "kube-scheduler-bridge-668101" is "Ready"
	I0630 15:53:59.509764 1620744 pod_ready.go:86] duration metric: took 399.372679ms for pod "kube-scheduler-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.509778 1620744 pod_ready.go:40] duration metric: took 11.217429269s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:59.557513 1620744 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:53:59.559079 1620744 out.go:177] * Done! kubectl is now configured to use "bridge-668101" cluster and "default" namespace by default
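
The `minor skew: 0` line just above is a client/cluster version comparison: kubectl officially supports one minor version of skew in either direction, so minikube reports when the difference grows. A small, hypothetical helper showing the arithmetic (not minikube's start.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew parses versions like "1.33.2" and returns the absolute
// difference of their minor components. Hypothetical helper for
// illustration; error handling is minimal.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.33.2", "1.33.2")
	fmt.Println("minor skew:", skew) // 0, matching the log line above
}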
	I0630 15:53:58.620232 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:58.638119 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:58.638194 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:58.674101 1612198 cri.go:89] found id: ""
	I0630 15:53:58.674160 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.674175 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:58.674184 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:58.674259 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:58.712115 1612198 cri.go:89] found id: ""
	I0630 15:53:58.712167 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.712179 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:58.712192 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:58.712261 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:58.766961 1612198 cri.go:89] found id: ""
	I0630 15:53:58.767004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.767016 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:58.767025 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:58.767114 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:58.817233 1612198 cri.go:89] found id: ""
	I0630 15:53:58.817274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.817286 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:58.817297 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:58.817379 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:58.858728 1612198 cri.go:89] found id: ""
	I0630 15:53:58.858757 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.858774 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:58.858784 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:58.858842 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:58.900041 1612198 cri.go:89] found id: ""
	I0630 15:53:58.900082 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.900094 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:58.900102 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:58.900176 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:58.944995 1612198 cri.go:89] found id: ""
	I0630 15:53:58.945026 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.945037 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:58.945046 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:58.945110 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:58.987156 1612198 cri.go:89] found id: ""
	I0630 15:53:58.987204 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.987216 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:58.987233 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:58.987252 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:59.054774 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:59.054821 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:59.071556 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:59.071601 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:59.144600 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:59.144631 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:59.144644 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:59.218471 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:59.218519 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:01.761632 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:01.781793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:01.781885 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:01.834337 1612198 cri.go:89] found id: ""
	I0630 15:54:01.834370 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.834381 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:01.834390 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:01.834456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:01.879488 1612198 cri.go:89] found id: ""
	I0630 15:54:01.879528 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.879542 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:01.879552 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:01.879629 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:01.919612 1612198 cri.go:89] found id: ""
	I0630 15:54:01.919656 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.919671 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:01.919681 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:01.919755 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:01.959025 1612198 cri.go:89] found id: ""
	I0630 15:54:01.959108 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.959118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:01.959126 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:01.959213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:02.004157 1612198 cri.go:89] found id: ""
	I0630 15:54:02.004193 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.004207 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:02.004216 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:02.004293 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:02.041453 1612198 cri.go:89] found id: ""
	I0630 15:54:02.041488 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.041496 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:02.041503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:02.041573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:02.092760 1612198 cri.go:89] found id: ""
	I0630 15:54:02.092801 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.092814 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:02.092824 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:02.092894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:02.130937 1612198 cri.go:89] found id: ""
	I0630 15:54:02.130976 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.130985 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:02.130996 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:02.131076 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:02.186285 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:02.186333 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:02.203252 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:02.203283 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:02.274788 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:02.274820 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:02.274836 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:02.354791 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:02.354835 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:04.902714 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:04.922560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:04.922631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:04.961257 1612198 cri.go:89] found id: ""
	I0630 15:54:04.961291 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.961302 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:04.961312 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:04.961388 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:04.997894 1612198 cri.go:89] found id: ""
	I0630 15:54:04.997927 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.997936 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:04.997942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:04.998007 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:05.038875 1612198 cri.go:89] found id: ""
	I0630 15:54:05.038923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.038936 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:05.038945 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:05.039035 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:05.080082 1612198 cri.go:89] found id: ""
	I0630 15:54:05.080123 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.080135 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:05.080145 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:05.080205 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:05.117322 1612198 cri.go:89] found id: ""
	I0630 15:54:05.117358 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.117371 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:05.117378 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:05.117469 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:05.172542 1612198 cri.go:89] found id: ""
	I0630 15:54:05.172578 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.172589 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:05.172598 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:05.172666 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:05.220246 1612198 cri.go:89] found id: ""
	I0630 15:54:05.220280 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.220291 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:05.220299 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:05.220365 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:05.279486 1612198 cri.go:89] found id: ""
	I0630 15:54:05.279521 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.279533 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:05.279548 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:05.279564 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:05.341677 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:05.341734 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:05.359513 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:05.359566 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:05.445100 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:05.445128 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:05.445144 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:05.552812 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:05.552883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.098433 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:08.115865 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:08.115985 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:08.155035 1612198 cri.go:89] found id: ""
	I0630 15:54:08.155077 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.155092 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:08.155103 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:08.155173 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:08.192666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.192702 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.192711 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:08.192719 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:08.192791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:08.234681 1612198 cri.go:89] found id: ""
	I0630 15:54:08.234710 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.234718 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:08.234723 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:08.234782 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:08.271666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.271699 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.271707 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:08.271714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:08.271769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:08.309335 1612198 cri.go:89] found id: ""
	I0630 15:54:08.309366 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.309375 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:08.309381 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:08.309471 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:08.351248 1612198 cri.go:89] found id: ""
	I0630 15:54:08.351284 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.351296 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:08.351305 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:08.351384 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:08.386803 1612198 cri.go:89] found id: ""
	I0630 15:54:08.386833 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.386843 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:08.386851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:08.386922 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:08.434407 1612198 cri.go:89] found id: ""
	I0630 15:54:08.434442 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.434451 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:08.434461 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:08.434474 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:08.510981 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:08.511009 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:08.511028 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:08.590361 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:08.590426 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.634603 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:08.634636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:08.687291 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:08.687339 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.202732 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:11.228516 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:11.228589 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:11.307836 1612198 cri.go:89] found id: ""
	I0630 15:54:11.307870 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.307882 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:11.307890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:11.307973 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:11.359347 1612198 cri.go:89] found id: ""
	I0630 15:54:11.359380 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.359400 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:11.359408 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:11.359467 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:11.414423 1612198 cri.go:89] found id: ""
	I0630 15:54:11.414469 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.414479 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:11.414486 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:11.414549 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:11.457669 1612198 cri.go:89] found id: ""
	I0630 15:54:11.457704 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.457722 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:11.457735 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:11.457804 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:11.511061 1612198 cri.go:89] found id: ""
	I0630 15:54:11.511131 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.511147 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:11.511159 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:11.511345 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:11.557886 1612198 cri.go:89] found id: ""
	I0630 15:54:11.557923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.557936 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:11.557946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:11.558014 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:11.603894 1612198 cri.go:89] found id: ""
	I0630 15:54:11.603926 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.603938 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:11.603946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:11.604016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:11.652115 1612198 cri.go:89] found id: ""
	I0630 15:54:11.652147 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.652156 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:11.652165 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:11.652177 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:11.700550 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:11.700588 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:11.761044 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:11.761088 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.779581 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:11.779669 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:11.872983 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:11.873013 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:11.873040 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:14.469180 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:14.488438 1612198 kubeadm.go:593] duration metric: took 4m4.858627578s to restartPrimaryControlPlane
	W0630 15:54:14.488521 1612198 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0630 15:54:14.488557 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:54:16.362367 1612198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.873774715s)
	I0630 15:54:16.362472 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:54:16.381754 1612198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:54:16.394832 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:54:16.407997 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:54:16.408022 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:54:16.408088 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:54:16.420299 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:54:16.420374 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:54:16.432689 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:54:16.450141 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:54:16.450232 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:54:16.466230 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.478725 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:54:16.478810 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.491926 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:54:16.503661 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:54:16.503754 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
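
The grep/rm pairs above implement one rule: keep a kubeconfig under /etc/kubernetes only if it already references https://control-plane.minikube.internal:8443; otherwise delete it so the following `kubeadm init` regenerates it. A local (non-SSH) sketch of that rule, with missing files treated the way `rm -f` treats them in the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfig removes path unless it already references endpoint.
// A missing file counts as stale (grep on a missing file exits non-zero
// in the log above), and removing an already-missing file is not an
// error, matching rm -f. Sketch only, not minikube's kubeadm.go.
func cleanStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	const ep = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		fmt.Println(f, cleanStaleConfig("/etc/kubernetes/"+f, ep))
	}
}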
	I0630 15:54:16.516000 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:54:16.604779 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:54:16.604866 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:54:16.771725 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:54:16.771885 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:54:16.772009 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:54:17.000568 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:54:17.002768 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:54:17.007633 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:54:17.007744 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:54:17.007835 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:54:17.007906 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:54:17.007987 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:54:17.008050 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:54:17.008130 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:54:17.008216 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:54:17.008304 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:54:17.008429 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:54:17.008479 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:54:17.008545 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:54:17.091062 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:54:17.216540 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:54:17.314609 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:54:17.399588 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:54:17.417749 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:54:17.418852 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:54:17.418923 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:54:17.631341 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:54:17.633197 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:54:17.633340 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:54:17.639557 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:54:17.642269 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:54:17.646155 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:54:17.647610 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:54:57.647972 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:54:57.648456 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:54:57.648704 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:02.649537 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:02.649775 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:12.650265 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:12.650526 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:32.650986 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:32.651250 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652241 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:12.652569 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652621 1612198 kubeadm.go:310] 
	I0630 15:56:12.652681 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:56:12.652741 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:56:12.652751 1612198 kubeadm.go:310] 
	I0630 15:56:12.652778 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:56:12.652814 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:56:12.652960 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:56:12.652983 1612198 kubeadm.go:310] 
	I0630 15:56:12.653129 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:56:12.653192 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:56:12.653257 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:56:12.653270 1612198 kubeadm.go:310] 
	I0630 15:56:12.653457 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:56:12.653585 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:56:12.653603 1612198 kubeadm.go:310] 
	I0630 15:56:12.653767 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:56:12.653893 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:56:12.654008 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:56:12.654137 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:56:12.654157 1612198 kubeadm.go:310] 
	I0630 15:56:12.655912 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:56:12.655994 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:56:12.656047 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0630 15:56:12.656312 1612198 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0630 15:56:12.656390 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:56:13.118145 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:56:13.137252 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:56:13.148791 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:56:13.148814 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:56:13.148866 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:56:13.159734 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:56:13.159815 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:56:13.170810 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:56:13.181716 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:56:13.181794 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:56:13.193772 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.204825 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:56:13.204895 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.216418 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:56:13.227545 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:56:13.227620 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:56:13.239663 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:56:13.314550 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:56:13.314640 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:56:13.462367 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:56:13.462550 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:56:13.462695 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:56:13.649387 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:56:13.651840 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:56:13.651943 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:56:13.652047 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:56:13.652179 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:56:13.652262 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:56:13.652381 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:56:13.652486 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:56:13.652658 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:56:13.652726 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:56:13.652788 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:56:13.652876 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:56:13.652930 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:56:13.653009 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:56:13.920791 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:56:14.049695 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:56:14.213882 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:56:14.469969 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:56:14.493927 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:56:14.496121 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:56:14.496179 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:56:14.667471 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:56:14.669824 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:56:14.670005 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:56:14.673040 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:56:14.674211 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:56:14.675608 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:56:14.680984 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:56:54.682952 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:56:54.683551 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:54.683769 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:59.684143 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:59.684406 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:09.685091 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:09.685374 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:29.686408 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:29.686681 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688249 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:58:09.688537 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688564 1612198 kubeadm.go:310] 
	I0630 15:58:09.688620 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:58:09.688672 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:58:09.688681 1612198 kubeadm.go:310] 
	I0630 15:58:09.688721 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:58:09.688774 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:58:09.688912 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:58:09.688921 1612198 kubeadm.go:310] 
	I0630 15:58:09.689114 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:58:09.689178 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:58:09.689250 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:58:09.689265 1612198 kubeadm.go:310] 
	I0630 15:58:09.689442 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:58:09.689568 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:58:09.689580 1612198 kubeadm.go:310] 
	I0630 15:58:09.689730 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:58:09.689812 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:58:09.689888 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:58:09.689950 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:58:09.689957 1612198 kubeadm.go:310] 
	I0630 15:58:09.692282 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:58:09.692363 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:58:09.692431 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:58:09.692497 1612198 kubeadm.go:394] duration metric: took 8m0.118278148s to StartCluster
	I0630 15:58:09.692554 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:58:09.692626 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:58:09.732128 1612198 cri.go:89] found id: ""
	I0630 15:58:09.732169 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.732178 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:58:09.732185 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:58:09.732247 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:58:09.764993 1612198 cri.go:89] found id: ""
	I0630 15:58:09.765024 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.765034 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:58:09.765042 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:58:09.765112 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:58:09.800767 1612198 cri.go:89] found id: ""
	I0630 15:58:09.800809 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.800820 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:58:09.800828 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:58:09.800888 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:58:09.834514 1612198 cri.go:89] found id: ""
	I0630 15:58:09.834544 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.834553 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:58:09.834560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:58:09.834636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:58:09.867918 1612198 cri.go:89] found id: ""
	I0630 15:58:09.867946 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.867955 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:58:09.867962 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:58:09.868016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:58:09.908166 1612198 cri.go:89] found id: ""
	I0630 15:58:09.908199 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.908208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:58:09.908215 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:58:09.908275 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:58:09.941613 1612198 cri.go:89] found id: ""
	I0630 15:58:09.941649 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.941658 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:58:09.941665 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:58:09.941721 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:58:09.983579 1612198 cri.go:89] found id: ""
	I0630 15:58:09.983617 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.983626 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:58:09.983637 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:58:09.983652 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:58:10.041447 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:58:10.041506 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:58:10.055597 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:58:10.055633 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:58:10.125308 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:58:10.125345 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:58:10.125363 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:58:10.231871 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:58:10.231919 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0630 15:58:10.270513 1612198 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0630 15:58:10.270594 1612198 out.go:270] * 
	W0630 15:58:10.270682 1612198 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.270703 1612198 out.go:270] * 
	W0630 15:58:10.272423 1612198 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0630 15:58:10.276013 1612198 out.go:201] 
	W0630 15:58:10.277283 1612198 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.277328 1612198 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0630 15:58:10.277358 1612198 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0630 15:58:10.279010 1612198 out.go:201] 
	
	
	==> CRI-O <==
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.418701457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299091418681686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40e77fd5-5542-421e-8b25-24002aa87ab1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.419373239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ae5418f-d952-4ce4-bf86-44cc62e5eb6b name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.419426926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ae5418f-d952-4ce4-bf86-44cc62e5eb6b name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.419460922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4ae5418f-d952-4ce4-bf86-44cc62e5eb6b name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.453258777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1661ee76-1306-440d-8033-6c4a1af2c1dd name=/runtime.v1.RuntimeService/Version
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.453346712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1661ee76-1306-440d-8033-6c4a1af2c1dd name=/runtime.v1.RuntimeService/Version
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.454615321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82dccbdc-de92-4213-a613-bc2f0e2c8e22 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.455357238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299091455290818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82dccbdc-de92-4213-a613-bc2f0e2c8e22 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.456370231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7223a83-ed2f-4951-86c2-12d4af927bc7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.456537406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7223a83-ed2f-4951-86c2-12d4af927bc7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.456573464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c7223a83-ed2f-4951-86c2-12d4af927bc7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.490074417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efc9a886-c7da-4dda-ba74-568206ed29ac name=/runtime.v1.RuntimeService/Version
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.490149396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efc9a886-c7da-4dda-ba74-568206ed29ac name=/runtime.v1.RuntimeService/Version
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.491235594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dedf93cf-6739-4b3c-b9fb-eb447aa0e241 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.491596805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299091491578782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dedf93cf-6739-4b3c-b9fb-eb447aa0e241 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.492074205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad83c5ab-a49f-4d25-84bc-7cc43ce58d5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.492120226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad83c5ab-a49f-4d25-84bc-7cc43ce58d5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.492159980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad83c5ab-a49f-4d25-84bc-7cc43ce58d5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.532498533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6bd3616-89b9-40cf-bf33-cd24cbb6cc28 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.532572070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6bd3616-89b9-40cf-bf33-cd24cbb6cc28 name=/runtime.v1.RuntimeService/Version
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.533720649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa528c9e-a2da-42be-86ca-c769ff4802a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.534193910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299091534171190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa528c9e-a2da-42be-86ca-c769ff4802a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.534734024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9d7f759-08fe-4b46-8dde-2e42d97376a5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.534777392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9d7f759-08fe-4b46-8dde-2e42d97376a5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 15:58:11 old-k8s-version-836310 crio[829]: time="2025-06-30 15:58:11.534810743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f9d7f759-08fe-4b46-8dde-2e42d97376a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun30 15:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001300] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004063] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.051769] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun30 15:50] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108667] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.116066] kauditd_printk_skb: 46 callbacks suppressed
	[Jun30 15:56] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:58:11 up 8 min,  0 users,  load average: 0.24, 0.12, 0.06
	Linux old-k8s-version-836310 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc00082e2a0, 0x4f04d00, 0xc000970790)
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009356f0)
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000949ef0, 0x4f0ac20, 0xc000112280, 0x1, 0xc00009e0c0)
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00082e2a0, 0xc00009e0c0)
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bff0c0, 0xc00094c120)
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6754]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 30 15:58:09 old-k8s-version-836310 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 30 15:58:09 old-k8s-version-836310 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 30 15:58:09 old-k8s-version-836310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jun 30 15:58:09 old-k8s-version-836310 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6794]: I0630 15:58:09.991085    6794 server.go:416] Version: v1.20.0
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6794]: I0630 15:58:09.991333    6794 server.go:837] Client rotation is on, will bootstrap in background
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6794]: I0630 15:58:09.993240    6794 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6794]: W0630 15:58:09.994116    6794 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 30 15:58:09 old-k8s-version-836310 kubelet[6794]: I0630 15:58:09.994445    6794 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
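The kubelet log above ends in a crash loop: Reflector.Run hands its list/watch body to wait.BackoffUntil, the watch keeps failing against the stopped apiserver, the process exits, and systemd restarts it (restart counter already at 20). A minimal self-contained sketch of that retry shape, with illustrative names only (not the kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

// backoffUntil re-runs fn, doubling the delay after each failure and
// resetting it after a success, until stop is closed. This mirrors the
// wait.BackoffUntil frame visible in the kubelet stack trace above.
func backoffUntil(fn func() error, initial, max time.Duration, stop <-chan struct{}) {
	delay := initial
	for {
		if err := fn(); err != nil {
			fmt.Println("retrying after error:", err)
			if delay *= 2; delay > max {
				delay = max
			}
		} else {
			delay = initial
		}
		select {
		case <-stop:
			return
		case <-time.After(delay):
		}
	}
}

func main() {
	stop := make(chan struct{})
	go func() { time.Sleep(2 * time.Second); close(stop) }()
	// Stand-in for a ListAndWatch attempt against a stopped apiserver.
	backoffUntil(func() error {
		return fmt.Errorf("dial tcp 192.168.61.88:8443: connect: connection refused")
	}, 100*time.Millisecond, time.Second, stop)
}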
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (252.96474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-836310" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (544.68s)
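The FAIL above follows the usual teardown probe: the helpers run `minikube status --format={{.APIServer}}` and treat exit status 2 as informative rather than fatal, since minikube encodes component state in its exit code while still printing the state ("Stopped") on stdout. A hedged Go sketch of such a probe (illustrative, not the actual helpers_test.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// apiServerState runs `minikube status` for a profile and returns the
// printed APIServer state even when the command exits non-zero.
func apiServerState(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero exit (e.g. status 2) still leaves the state on stdout.
		return strings.TrimSpace(string(out)), nil
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := apiServerState("old-k8s-version-836310")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", state) // e.g. "Stopped" -> skip kubectl-based checks
}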

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
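The flood of identical warnings that follows is the poll loop in action: each attempt is an ordinary pod list by label selector against the profile's apiserver, and with the apiserver stopped every attempt fails with "connection refused". A hedged client-go sketch of that call shape (illustrative only, not helpers_test.go itself; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig context the test uses (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Issues GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard,
	// the exact request seen in the warnings below; while the apiserver is
	// down it returns "dial tcp ...:8443: connect: connection refused".
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		fmt.Println("WARNING:", err) // the harness logs this and keeps polling
		return
	}
	fmt.Println("matching pods:", len(pods.Items))
}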
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 15 more times)
E0630 15:58:27.759662 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 5 more times)
E0630 15:58:33.880678 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 14 more times)
E0630 15:58:48.740565 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:48.795140 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:48.801639 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:48.813149 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:48.834663 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:48.876219 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:48.957807 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:49.119450 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
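The cert_rotation errors interleaved with the warnings come from this test binary's client-go transport cache: it keeps trying to reload client certificates for profiles (flannel-668101, bridge-668101, and others) whose .minikube files have already been cleaned up, so every reload hits a missing client.crt. The failing step is essentially a key-pair load from disk; a hedged sketch with placeholder paths (not client-go's internal code):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Placeholder paths standing in for a deleted minikube profile.
	certFile := "/home/jenkins/minikube-profile/client.crt"
	keyFile := "/home/jenkins/minikube-profile/client.key"
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		// Prints: open /home/jenkins/minikube-profile/client.crt: no such file or directory
		fmt.Println("Loading client cert failed:", err)
	}
}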
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:58:49.441049 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:50.082675 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:58:51.364614 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:58:53.926712 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 4 more times)
E0630 15:58:59.048348 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:59:00.017547 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:00.024058 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:00.035544 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:00.057142 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:00.098735 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:00.180391 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:59:00.341981 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:00.663777 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:01.305139 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:59:02.587417 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:59:05.149773 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 3 more times)
E0630 15:59:08.721610 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:09.290702 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:59:10.272085 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 10 more times)
E0630 15:59:20.514496 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:59:21.463221 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 7 more times)
E0630 15:59:29.772337 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 15:59:31.749358 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 8 more times)
E0630 15:59:40.996690 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 7 more times)
E0630 15:59:48.660503 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:59:49.179985 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 6 more times)
E0630 15:59:55.802405 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 14 more times)
E0630 16:00:10.734097 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(previous warning repeated 9 more times)
E0630 16:00:20.916660 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:00:21.958926 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 8 more times]
E0630 16:00:30.643076 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 33 more times]
E0630 16:01:04.684639 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 16:01:04.878978 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 27 more times]
E0630 16:01:32.582478 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 16:01:32.656100 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:01:34.814882 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 8 more times]
E0630 16:01:43.880940 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 3 more times]
E0630 16:01:47.889247 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 23 more times]
E0630 16:02:11.941512 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 3 more times]
E0630 16:02:15.590905 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 11 more times]
E0630 16:02:27.761050 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 11 more times]
E0630 16:02:39.644411 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
[last message repeated 6 more times]
E0630 16:02:46.780406 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 28 times in total)
E0630 16:03:14.485243 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 9 times in total)
E0630 16:03:23.994084 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 25 times in total)
E0630 16:03:48.794847 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 11 times in total)
E0630 16:04:00.017581 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 17 times in total)
E0630 16:04:16.498072 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 5 times in total)
E0630 16:04:21.463488 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 6 times in total)
E0630 16:04:27.722975 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(last message repeated 54 times in total)
E0630 16:05:20.916569 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:06:04.684382 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 16:06:04.878404 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:06:11.731276 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:06:34.814711 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:06:47.888642 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:07:11.941767 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (243.259943ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-836310" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
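The repeated warnings above come from a poll loop: the helper lists pods matching the k8s-app=kubernetes-dashboard selector over and over, treating transient API errors (such as the connection-refused responses while the apiserver is down) as reasons to retry rather than fail, until the 9m0s context deadline expires. A minimal sketch of that pattern in Go, assuming client-go; the kubeconfig path is a placeholder and the loop is illustrative, not the suite's own code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; not taken from the test suite.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		// Each failed List corresponds to one "pod list ... returned" warning
		// in the log; the loop keeps retrying until the deadline is hit.
		err = wait.PollUntilContextCancel(ctx, 3*time.Second, true, func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				fmt.Printf("WARNING: pod list returned: %v\n", err) // e.g. connection refused
				return false, nil // transient error: keep polling
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil // a matching pod is Ready
					}
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("pod failed to start: context deadline exceeded")
		}
	}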
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (245.581583ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
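minikube reports component state through the status command's exit code, which is why the harness prints "status error: exit status 2 (may be ok)" and carries on instead of aborting. A minimal sketch of running the command and reading the exit code, assuming only the Go standard library; the meaning of each specific code is minikube-defined and not spelled out in this log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Host}}",
			"-p", "old-k8s-version-836310", "-n", "old-k8s-version-836310")
		out, err := cmd.Output()
		fmt.Printf("stdout: %s", out) // e.g. "Running" or "Stopped"

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit code encodes cluster state, not necessarily
			// a harness failure, so log it and continue.
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		}
	}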
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-836310 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-668101 sudo crio                          | flannel-668101 | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| delete  | -p flannel-668101                                    | flannel-668101 | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo docker                         | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo find                           | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo crio                           | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p bridge-668101                                     | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:52:42
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:52:42.950710 1620744 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:52:42.950982 1620744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:52:42.950992 1620744 out.go:358] Setting ErrFile to fd 2...
	I0630 15:52:42.950997 1620744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:52:42.951256 1620744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:52:42.951919 1620744 out.go:352] Setting JSON to false
	I0630 15:52:42.953176 1620744 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34455,"bootTime":1751264308,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:52:42.953303 1620744 start.go:140] virtualization: kvm guest
	I0630 15:52:42.956113 1620744 out.go:177] * [bridge-668101] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:52:42.957699 1620744 notify.go:220] Checking for updates...
	I0630 15:52:42.957717 1620744 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:52:42.959576 1620744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:52:42.961566 1620744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:52:42.963634 1620744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:42.965261 1620744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:52:42.966949 1620744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:52:42.968735 1620744 config.go:182] Loaded profile config "enable-default-cni-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:42.968869 1620744 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:42.968990 1620744 config.go:182] Loaded profile config "old-k8s-version-836310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:52:42.969114 1620744 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:52:43.011541 1620744 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:52:43.013118 1620744 start.go:304] selected driver: kvm2
	I0630 15:52:43.013145 1620744 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:52:43.013160 1620744 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:52:43.014286 1620744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:52:43.014403 1620744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:52:43.032217 1620744 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:52:43.032283 1620744 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 15:52:43.032559 1620744 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:52:43.032604 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:52:43.032615 1620744 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 15:52:43.032686 1620744 start.go:347] cluster config:
	{Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:52:43.032888 1620744 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:52:43.035138 1620744 out.go:177] * Starting "bridge-668101" primary control-plane node in "bridge-668101" cluster
	I0630 15:52:41.357269 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:41.358093 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find current IP address of domain flannel-668101 in network mk-flannel-668101
	I0630 15:52:41.358123 1619158 main.go:141] libmachine: (flannel-668101) DBG | I0630 15:52:41.358037 1619189 retry.go:31] will retry after 4.215568728s: waiting for domain to come up
	W0630 15:52:44.159824 1617293 pod_ready.go:104] pod "coredns-674b8bbfcf-6rphx" is not "Ready", error: <nil>
	I0630 15:52:44.656114 1617293 pod_ready.go:99] pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace is gone: getting pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace (will retry): pods "coredns-674b8bbfcf-6rphx" not found
	I0630 15:52:44.656143 1617293 pod_ready.go:86] duration metric: took 10.003645641s for pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.656159 1617293 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-v5d7m" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.660419 1617293 pod_ready.go:94] pod "coredns-674b8bbfcf-v5d7m" is "Ready"
	I0630 15:52:44.660451 1617293 pod_ready.go:86] duration metric: took 4.285712ms for pod "coredns-674b8bbfcf-v5d7m" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.662598 1617293 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.665846 1617293 pod_ready.go:94] pod "etcd-enable-default-cni-668101" is "Ready"
	I0630 15:52:44.665873 1617293 pod_ready.go:86] duration metric: took 3.248201ms for pod "etcd-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.667505 1617293 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.672030 1617293 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-668101" is "Ready"
	I0630 15:52:44.672060 1617293 pod_ready.go:86] duration metric: took 4.533989ms for pod "kube-apiserver-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.673855 1617293 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.057371 1617293 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-668101" is "Ready"
	I0630 15:52:45.057433 1617293 pod_ready.go:86] duration metric: took 383.556453ms for pod "kube-controller-manager-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.257321 1617293 pod_ready.go:83] waiting for pod "kube-proxy-gx8xr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.657721 1617293 pod_ready.go:94] pod "kube-proxy-gx8xr" is "Ready"
	I0630 15:52:45.657765 1617293 pod_ready.go:86] duration metric: took 400.308271ms for pod "kube-proxy-gx8xr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.857507 1617293 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:46.256921 1617293 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-668101" is "Ready"
	I0630 15:52:46.256953 1617293 pod_ready.go:86] duration metric: took 399.409105ms for pod "kube-scheduler-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:46.256970 1617293 pod_ready.go:40] duration metric: took 11.610545265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:52:46.306916 1617293 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:52:46.308982 1617293 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-668101" cluster and "default" namespace by default
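The pod_ready.go lines above wait for each control-plane pod to become "Ready" or to disappear entirely (a replaced CoreDNS pod counts as success once it is gone). Below is a minimal client-go sketch of that "Ready or gone" loop; it is not minikube's implementation, and the pod name, namespace, poll interval, and timeout are assumptions chosen to mirror the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitReadyOrGone polls a pod until its Ready condition is True, or until a
// NotFound error shows the pod has been deleted (also treated as success).
func waitReadyOrGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is gone: acceptable, like coredns-674b8bbfcf-6rphx above
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod is Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q never became Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond): // assumed poll interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitReadyOrGone(ctx, cs, "kube-system", "coredns-674b8bbfcf-v5d7m"))
}
```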
	W0630 15:52:42.720632 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:42.720657 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:42.720672 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:42.805318 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:42.805369 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:45.356097 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:45.375177 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:45.375249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:45.411531 1612198 cri.go:89] found id: ""
	I0630 15:52:45.411573 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.411585 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:45.411594 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:45.411670 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:45.446010 1612198 cri.go:89] found id: ""
	I0630 15:52:45.446040 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.446049 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:45.446055 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:45.446126 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:45.483165 1612198 cri.go:89] found id: ""
	I0630 15:52:45.483213 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.483225 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:45.483234 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:45.483309 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:45.519693 1612198 cri.go:89] found id: ""
	I0630 15:52:45.519724 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.519732 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:45.519739 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:45.519813 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:45.554863 1612198 cri.go:89] found id: ""
	I0630 15:52:45.554902 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.554913 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:45.554921 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:45.555000 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:45.590429 1612198 cri.go:89] found id: ""
	I0630 15:52:45.590460 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.590469 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:45.590476 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:45.590545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:45.625876 1612198 cri.go:89] found id: ""
	I0630 15:52:45.625914 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.625927 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:45.625935 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:45.626002 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:45.663157 1612198 cri.go:89] found id: ""
	I0630 15:52:45.663188 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.663197 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:45.663210 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:45.663227 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:45.717765 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:45.717817 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:45.731782 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:45.731815 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:45.798057 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:45.798090 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:45.798106 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:45.878867 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:45.878917 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
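The "container status" line above runs a fallback one-liner on the guest: list containers with crictl if it is on PATH, otherwise fall back to docker. A small local sketch of the same fallback via os/exec (this is illustrative, not minikube's ssh_runner code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the logged command: the backticks substitute
// `which crictl`, and the `||` falls back to docker if crictl fails.
func containerStatus() (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out, err)
}
```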
	I0630 15:52:43.036635 1620744 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:52:43.036694 1620744 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 15:52:43.036707 1620744 cache.go:56] Caching tarball of preloaded images
	I0630 15:52:43.036821 1620744 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:52:43.036837 1620744 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 15:52:43.036964 1620744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json ...
	I0630 15:52:43.036993 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json: {Name:mke71cd9af919bb85465b3e686b56c4cd0e1c7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:52:43.037185 1620744 start.go:360] acquireMachinesLock for bridge-668101: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
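The profile save above takes a named write lock (Delay:500ms Timeout:1m0s) before writing config.json, so parallel test runs cannot clobber each other's profiles. A stdlib-only stand-in for that pattern using an exclusive lock file; minikube's actual lock implementation differs, and the file names here are assumptions:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock creates path.lock exclusively, retrying every 500ms until the
// timeout, as a crude analogue of the named lock in the log above.
func acquireLock(path string, timeout time.Duration) (release func(), err error) {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lock) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + lock)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg := map[string]any{"Name": "bridge-668101", "Driver": "kvm2"} // toy config
	release, err := acquireLock("config.json", time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	data, _ := json.MarshalIndent(cfg, "", "  ")
	if err := os.WriteFile("config.json", data, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("profile saved")
}
```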
	I0630 15:52:45.576190 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:45.576849 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find current IP address of domain flannel-668101 in network mk-flannel-668101
	I0630 15:52:45.576874 1619158 main.go:141] libmachine: (flannel-668101) DBG | I0630 15:52:45.576802 1619189 retry.go:31] will retry after 5.00816622s: waiting for domain to come up
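The retry.go lines above poll libvirt for the new domain's DHCP lease with growing, slightly randomized delays (4.2s, then 5.0s). A generic sketch of that wait-with-backoff shape; the backoff policy is an assumption, only the "will retry after Ns" behavior is taken from the log:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookup with a growing, jittered delay until it returns an
// IP or the attempt budget is exhausted.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		d := time.Duration(1+i) * time.Second
		d += time.Duration(rand.Int63n(int64(time.Second))) // jitter, like the odd delays in the log
		fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
		time.Sleep(d)
	}
	return "", errors.New("domain never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no lease yet") // simulated: lease appears on 3rd poll
		}
		return "192.168.50.164", nil
	}, 10)
	fmt.Println(ip, err)
}
```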
	I0630 15:52:48.422047 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:48.441634 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:48.441712 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:48.482676 1612198 cri.go:89] found id: ""
	I0630 15:52:48.482706 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.482714 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:48.482721 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:48.482781 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:48.523604 1612198 cri.go:89] found id: ""
	I0630 15:52:48.523645 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.523659 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:48.523669 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:48.523740 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:48.566545 1612198 cri.go:89] found id: ""
	I0630 15:52:48.566576 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.566588 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:48.566595 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:48.566667 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:48.602166 1612198 cri.go:89] found id: ""
	I0630 15:52:48.602204 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.602219 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:48.602228 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:48.602296 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:48.645664 1612198 cri.go:89] found id: ""
	I0630 15:52:48.645701 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.645712 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:48.645724 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:48.645796 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:48.689364 1612198 cri.go:89] found id: ""
	I0630 15:52:48.689437 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.689449 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:48.689457 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:48.689532 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:48.727484 1612198 cri.go:89] found id: ""
	I0630 15:52:48.727594 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.727614 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:48.727623 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:48.727695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:48.765617 1612198 cri.go:89] found id: ""
	I0630 15:52:48.765649 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.765662 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:48.765676 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:48.765696 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:48.832480 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:48.832525 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:48.851001 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:48.851033 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:48.935090 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:48.935117 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:48.935139 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:49.020511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:49.020556 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.569582 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:51.586531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:51.586608 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:51.623986 1612198 cri.go:89] found id: ""
	I0630 15:52:51.624022 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.624034 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:51.624041 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:51.624097 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:51.660234 1612198 cri.go:89] found id: ""
	I0630 15:52:51.660289 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.660311 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:51.660321 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:51.660396 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:51.694392 1612198 cri.go:89] found id: ""
	I0630 15:52:51.694421 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.694431 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:51.694439 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:51.694509 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:51.733636 1612198 cri.go:89] found id: ""
	I0630 15:52:51.733679 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.733692 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:51.733700 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:51.733767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:51.770073 1612198 cri.go:89] found id: ""
	I0630 15:52:51.770105 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.770116 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:51.770125 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:51.770193 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:51.806054 1612198 cri.go:89] found id: ""
	I0630 15:52:51.806082 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.806096 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:51.806105 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:51.806166 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:51.844220 1612198 cri.go:89] found id: ""
	I0630 15:52:51.844253 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.844263 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:51.844270 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:51.844337 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:51.879139 1612198 cri.go:89] found id: ""
	I0630 15:52:51.879180 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.879192 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:51.879206 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:51.879225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:51.959131 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:51.959178 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.999852 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:51.999898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:52.054538 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:52.054586 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:52.068544 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:52.068582 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:52.141184 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:50.586392 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.586877 1619158 main.go:141] libmachine: (flannel-668101) found domain IP: 192.168.50.164
	I0630 15:52:50.586929 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has current primary IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.586951 1619158 main.go:141] libmachine: (flannel-668101) reserving static IP address...
	I0630 15:52:50.587266 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find host DHCP lease matching {name: "flannel-668101", mac: "52:54:00:d0:56:26", ip: "192.168.50.164"} in network mk-flannel-668101
	I0630 15:52:50.692673 1619158 main.go:141] libmachine: (flannel-668101) DBG | Getting to WaitForSSH function...
	I0630 15:52:50.692714 1619158 main.go:141] libmachine: (flannel-668101) reserved static IP address 192.168.50.164 for domain flannel-668101
	I0630 15:52:50.692729 1619158 main.go:141] libmachine: (flannel-668101) waiting for SSH...
	I0630 15:52:50.695660 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.696050 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101
	I0630 15:52:50.696074 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find defined IP address of network mk-flannel-668101 interface with MAC address 52:54:00:d0:56:26
	I0630 15:52:50.696281 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH client type: external
	I0630 15:52:50.696306 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa (-rw-------)
	I0630 15:52:50.696335 1619158 main.go:141] libmachine: (flannel-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:52:50.696364 1619158 main.go:141] libmachine: (flannel-668101) DBG | About to run SSH command:
	I0630 15:52:50.696404 1619158 main.go:141] libmachine: (flannel-668101) DBG | exit 0
	I0630 15:52:50.701524 1619158 main.go:141] libmachine: (flannel-668101) DBG | SSH cmd err, output: exit status 255: 
	I0630 15:52:50.701550 1619158 main.go:141] libmachine: (flannel-668101) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0630 15:52:50.701561 1619158 main.go:141] libmachine: (flannel-668101) DBG | command : exit 0
	I0630 15:52:50.701568 1619158 main.go:141] libmachine: (flannel-668101) DBG | err     : exit status 255
	I0630 15:52:50.701579 1619158 main.go:141] libmachine: (flannel-668101) DBG | output  : 
	I0630 15:52:53.701789 1619158 main.go:141] libmachine: (flannel-668101) DBG | Getting to WaitForSSH function...
	I0630 15:52:53.704360 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.704932 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.704962 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.705130 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH client type: external
	I0630 15:52:53.705161 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa (-rw-------)
	I0630 15:52:53.705186 1619158 main.go:141] libmachine: (flannel-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:52:53.705196 1619158 main.go:141] libmachine: (flannel-668101) DBG | About to run SSH command:
	I0630 15:52:53.705216 1619158 main.go:141] libmachine: (flannel-668101) DBG | exit 0
	I0630 15:52:53.830137 1619158 main.go:141] libmachine: (flannel-668101) DBG | SSH cmd err, output: <nil>: 
	I0630 15:52:53.830489 1619158 main.go:141] libmachine: (flannel-668101) KVM machine creation complete
	I0630 15:52:53.831158 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetConfigRaw
	I0630 15:52:53.831811 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:53.832305 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:53.832539 1619158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:52:53.832558 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:52:53.834243 1619158 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:52:53.834258 1619158 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:52:53.834264 1619158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:52:53.834269 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:53.837692 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.838098 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.838132 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.838367 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:53.838567 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.838712 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.838827 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:53.838973 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:53.839228 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:53.839240 1619158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:52:53.941129 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
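The liveness probe above is just `exit 0` run over SSH: a nil error means the guest's sshd and shell are up. A self-contained sketch of that probe with golang.org/x/crypto/ssh (the address, user, and key path below are taken from the log, but this is not libmachine's code):

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAlive dials the guest and runs "exit 0"; a nil error means SSH is ready.
func sshAlive(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	fmt.Println(sshAlive("192.168.50.164:22", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/flannel-668101/id_rsa")))
}
```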
	I0630 15:52:53.941166 1619158 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:52:53.941179 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:53.945852 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.946724 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.946789 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.947156 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:53.947488 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.947724 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.947876 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:53.948105 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:53.948402 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:53.948418 1619158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:52:54.054669 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:52:54.054748 1619158 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:52:54.054758 1619158 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:52:54.054767 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.055102 1619158 buildroot.go:166] provisioning hostname "flannel-668101"
	I0630 15:52:54.055132 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.055454 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.059064 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.059471 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.059502 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.059708 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:54.059899 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.060070 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.060224 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:54.060393 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:54.060624 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:54.060640 1619158 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-668101 && echo "flannel-668101" | sudo tee /etc/hostname
	I0630 15:52:54.177979 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-668101
	
	I0630 15:52:54.178018 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.181025 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.181363 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.181395 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.181596 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:54.181838 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.182126 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.182320 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:54.182493 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:54.182708 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:54.182725 1619158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-668101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-668101/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-668101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:52:54.297007 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:52:54.297044 1619158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:52:54.297108 1619158 buildroot.go:174] setting up certificates
	I0630 15:52:54.297155 1619158 provision.go:84] configureAuth start
	I0630 15:52:54.297174 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.297629 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:54.300624 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.300972 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.301001 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.301156 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.303586 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.303998 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.304030 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.304173 1619158 provision.go:143] copyHostCerts
	I0630 15:52:54.304256 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:52:54.304278 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:52:54.304353 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:52:54.304508 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:52:54.304518 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:52:54.304545 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:52:54.304611 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:52:54.304618 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:52:54.304640 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:52:54.304715 1619158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.flannel-668101 san=[127.0.0.1 192.168.50.164 flannel-668101 localhost minikube]
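The provision.go line above generates a server certificate signed by the minikube CA, carrying the listed SANs (127.0.0.1, the VM IP, the hostname, localhost, minikube). A rough crypto/x509 sketch of issuing such a SAN-bearing server cert; it is illustrative only (minikube loads its existing ca.pem/ca-key.pem rather than creating a fresh CA, and error handling is elided for brevity):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical stand-in CA; the real flow would parse ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-668101"}},
		DNSNames:     []string{"flannel-668101", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.164")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```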
	I0630 15:52:55.093359 1619158 provision.go:177] copyRemoteCerts
	I0630 15:52:55.093451 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:52:55.093490 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.096608 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.097063 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.097100 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.097382 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.097605 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.097804 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.097967 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.181657 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:52:55.212265 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0630 15:52:55.244844 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:52:55.279323 1619158 provision.go:87] duration metric: took 982.144024ms to configureAuth
	I0630 15:52:55.279365 1619158 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:52:55.279616 1619158 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:55.279709 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.283643 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.284181 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.284211 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.284404 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.284627 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.284847 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.285000 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.285212 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:55.285583 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:55.285612 1619158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:52:55.778589 1620744 start.go:364] duration metric: took 12.741358919s to acquireMachinesLock for "bridge-668101"
	I0630 15:52:55.778680 1620744 start.go:93] Provisioning new machine with config: &{Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:52:55.778835 1620744 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 15:52:55.530045 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:52:55.530104 1619158 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:52:55.530116 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetURL
	I0630 15:52:55.531952 1619158 main.go:141] libmachine: (flannel-668101) DBG | using libvirt version 6000000
	I0630 15:52:55.534427 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.534823 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.534843 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.535146 1619158 main.go:141] libmachine: Docker is up and running!
	I0630 15:52:55.535159 1619158 main.go:141] libmachine: Reticulating splines...
	I0630 15:52:55.535167 1619158 client.go:171] duration metric: took 30.008807578s to LocalClient.Create
	I0630 15:52:55.535196 1619158 start.go:167] duration metric: took 30.008887821s to libmachine.API.Create "flannel-668101"
	I0630 15:52:55.535211 1619158 start.go:293] postStartSetup for "flannel-668101" (driver="kvm2")
	I0630 15:52:55.535279 1619158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:52:55.535323 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.535615 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:52:55.535648 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.538056 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.538461 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.538505 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.538621 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.538865 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.539071 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.539281 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.621263 1619158 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:52:55.626036 1619158 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:52:55.626073 1619158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:52:55.626186 1619158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:52:55.626347 1619158 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:52:55.626445 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:52:55.637649 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:52:55.667310 1619158 start.go:296] duration metric: took 132.08213ms for postStartSetup
	I0630 15:52:55.667372 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetConfigRaw
	I0630 15:52:55.668073 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:55.671293 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.671868 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.671903 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.672201 1619158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/config.json ...
	I0630 15:52:55.672423 1619158 start.go:128] duration metric: took 30.167785685s to createHost
	I0630 15:52:55.672451 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.674800 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.675142 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.675174 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.675451 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.675643 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.675788 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.676031 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.676253 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:55.676551 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:55.676567 1619158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:52:55.778402 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298775.758912603
	
	I0630 15:52:55.778427 1619158 fix.go:216] guest clock: 1751298775.758912603
	I0630 15:52:55.778435 1619158 fix.go:229] Guest: 2025-06-30 15:52:55.758912603 +0000 UTC Remote: 2025-06-30 15:52:55.67243923 +0000 UTC m=+30.329704815 (delta=86.473373ms)
	I0630 15:52:55.778459 1619158 fix.go:200] guest clock delta is within tolerance: 86.473373ms
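The fix.go lines above sanity-check the guest clock: run `date +%s.%N` over SSH, parse the fractional epoch seconds, and compare against the host's time at the moment of the call (here the delta was 86.473373ms). A small sketch of that comparison; the 2s tolerance below is an assumption, not minikube's configured value:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns the
// guest-minus-host offset. Float parsing keeps roughly microsecond precision,
// which is plenty for a skew check.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host ("Remote") timestamp taken from the log line above.
	host := time.Date(2025, 6, 30, 15, 52, 55, 672439230, time.UTC)
	d, _ := clockDelta("1751298775.758912603\n", host)
	fmt.Printf("delta=%v within tolerance: %v\n", d, math.Abs(d.Seconds()) < 2)
}
```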
	I0630 15:52:55.778466 1619158 start.go:83] releasing machines lock for "flannel-668101", held for 30.273912922s
	I0630 15:52:55.778518 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.778846 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:55.782021 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.782499 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.782533 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.782737 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783225 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783481 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783595 1619158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:52:55.783641 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.783703 1619158 ssh_runner.go:195] Run: cat /version.json
	I0630 15:52:55.783731 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.786539 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.786668 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.786964 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.786995 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.787022 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.787034 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.787195 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.787318 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.787429 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.787516 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.787627 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.787712 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.787790 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.787848 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.874997 1619158 ssh_runner.go:195] Run: systemctl --version
	I0630 15:52:55.904909 1619158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:52:56.070066 1619158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:52:56.076773 1619158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:52:56.076855 1619158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:52:56.096159 1619158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
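The dense find one-liner above, unpacked for readability (a sketch with the same effect: move any bridge/podman CNI configs out of the way so they cannot conflict with the CNI minikube is about to install):

    # Rename conflicting CNI configs to *.mk_disabled, skipping ones that
    # are already disabled; globs that match nothing are skipped too.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -f "$f" ] || continue
      case "$f" in *.mk_disabled) continue ;; esac
      sudo mv "$f" "$f.mk_disabled"
    done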
	I0630 15:52:56.096192 1619158 start.go:495] detecting cgroup driver to use...
	I0630 15:52:56.096267 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:52:56.116203 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:52:56.134008 1619158 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:52:56.134070 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:52:56.150561 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:52:56.166862 1619158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:52:56.306622 1619158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:52:56.473344 1619158 docker.go:246] disabling docker service ...
	I0630 15:52:56.473467 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:52:56.490252 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:52:56.505665 1619158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:52:56.705455 1619158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:52:56.856676 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:52:56.873735 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:52:56.897728 1619158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:52:56.897807 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.909980 1619158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:52:56.910087 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.921206 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.932511 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.943614 1619158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:52:56.956362 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.968071 1619158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.987887 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.999240 1619158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:52:57.009535 1619158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:52:57.009612 1619158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:52:57.024825 1619158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 15:52:57.035690 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:52:57.175638 1619158 ssh_runner.go:195] Run: sudo systemctl restart crio
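Taken together, the sed edits above leave the CRI-O drop-in in roughly this state (illustrative only: the section headers are assumed from upstream CRI-O documentation, and the real 02-crio.conf carries additional settings):

    # Approximate end state of /etc/crio/crio.conf.d/02-crio.conf
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio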
	I0630 15:52:57.278362 1619158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:52:57.278504 1619158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:52:57.285443 1619158 start.go:563] Will wait 60s for crictl version
	I0630 15:52:57.285511 1619158 ssh_runner.go:195] Run: which crictl
	I0630 15:52:57.289297 1619158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:52:57.341170 1619158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:52:57.341278 1619158 ssh_runner.go:195] Run: crio --version
	I0630 15:52:57.370996 1619158 ssh_runner.go:195] Run: crio --version
	I0630 15:52:57.408719 1619158 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:52:54.642061 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:54.657561 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:54.657631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:54.699127 1612198 cri.go:89] found id: ""
	I0630 15:52:54.699156 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.699165 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:54.699172 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:54.699249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:54.743537 1612198 cri.go:89] found id: ""
	I0630 15:52:54.743582 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.743595 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:54.743604 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:54.743691 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:54.793655 1612198 cri.go:89] found id: ""
	I0630 15:52:54.793692 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.793705 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:54.793714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:54.793789 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:54.836404 1612198 cri.go:89] found id: ""
	I0630 15:52:54.836439 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.836450 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:54.836458 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:54.836530 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:54.881834 1612198 cri.go:89] found id: ""
	I0630 15:52:54.881866 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.881874 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:54.881881 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:54.881945 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:54.920907 1612198 cri.go:89] found id: ""
	I0630 15:52:54.920937 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.920945 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:54.920952 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:54.921019 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:54.964724 1612198 cri.go:89] found id: ""
	I0630 15:52:54.964777 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.964790 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:54.964799 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:54.964877 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:55.000611 1612198 cri.go:89] found id: ""
	I0630 15:52:55.000646 1612198 logs.go:282] 0 containers: []
	W0630 15:52:55.000654 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:55.000665 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:55.000678 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:55.075252 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:55.075285 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:55.075306 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:55.162081 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:55.162133 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:55.226240 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:55.226277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:55.297365 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:55.297429 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
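The component-by-component container listing above is the same query repeated with a different name filter each time; a compact equivalent:

    # Empty output from crictl means no container matched that name,
    # which is what produces the `found id: ""` lines in the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done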
	I0630 15:52:55.781091 1620744 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0630 15:52:55.781346 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:52:55.781446 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:52:55.799943 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0630 15:52:55.800489 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:52:55.801103 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:52:55.801134 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:52:55.801483 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:52:55.801678 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:52:55.801826 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:52:55.802012 1620744 start.go:159] libmachine.API.Create for "bridge-668101" (driver="kvm2")
	I0630 15:52:55.802045 1620744 client.go:168] LocalClient.Create starting
	I0630 15:52:55.802082 1620744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 15:52:55.802123 1620744 main.go:141] libmachine: Decoding PEM data...
	I0630 15:52:55.802145 1620744 main.go:141] libmachine: Parsing certificate...
	I0630 15:52:55.802228 1620744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 15:52:55.802259 1620744 main.go:141] libmachine: Decoding PEM data...
	I0630 15:52:55.802275 1620744 main.go:141] libmachine: Parsing certificate...
	I0630 15:52:55.802328 1620744 main.go:141] libmachine: Running pre-create checks...
	I0630 15:52:55.802341 1620744 main.go:141] libmachine: (bridge-668101) Calling .PreCreateCheck
	I0630 15:52:55.802728 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:52:55.803114 1620744 main.go:141] libmachine: Creating machine...
	I0630 15:52:55.803131 1620744 main.go:141] libmachine: (bridge-668101) Calling .Create
	I0630 15:52:55.803562 1620744 main.go:141] libmachine: (bridge-668101) creating KVM machine...
	I0630 15:52:55.803587 1620744 main.go:141] libmachine: (bridge-668101) creating network...
	I0630 15:52:55.805278 1620744 main.go:141] libmachine: (bridge-668101) DBG | found existing default KVM network
	I0630 15:52:55.806568 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.806371 1620899 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2c:4b:58} reservation:<nil>}
	I0630 15:52:55.807384 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.807300 1620899 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:29:de} reservation:<nil>}
	I0630 15:52:55.808183 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.808055 1620899 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:d8:99} reservation:<nil>}
	I0630 15:52:55.809357 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.809236 1620899 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002cac60}
	I0630 15:52:55.809380 1620744 main.go:141] libmachine: (bridge-668101) DBG | created network xml: 
	I0630 15:52:55.809386 1620744 main.go:141] libmachine: (bridge-668101) DBG | <network>
	I0630 15:52:55.809392 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <name>mk-bridge-668101</name>
	I0630 15:52:55.809397 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <dns enable='no'/>
	I0630 15:52:55.809425 1620744 main.go:141] libmachine: (bridge-668101) DBG |   
	I0630 15:52:55.809435 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0630 15:52:55.809443 1620744 main.go:141] libmachine: (bridge-668101) DBG |     <dhcp>
	I0630 15:52:55.809449 1620744 main.go:141] libmachine: (bridge-668101) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0630 15:52:55.809456 1620744 main.go:141] libmachine: (bridge-668101) DBG |     </dhcp>
	I0630 15:52:55.809476 1620744 main.go:141] libmachine: (bridge-668101) DBG |   </ip>
	I0630 15:52:55.809495 1620744 main.go:141] libmachine: (bridge-668101) DBG |   
	I0630 15:52:55.809501 1620744 main.go:141] libmachine: (bridge-668101) DBG | </network>
	I0630 15:52:55.809510 1620744 main.go:141] libmachine: (bridge-668101) DBG | 
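With the generated XML above saved to a file, the manual equivalent of what the kvm2 driver does through the libvirt API would be:

    # Define and activate the private network from the generated XML.
    virsh net-define mk-bridge-668101.xml
    virsh net-start mk-bridge-668101
    virsh net-autostart mk-bridge-668101   # optional: persist across host reboots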
	I0630 15:52:55.815963 1620744 main.go:141] libmachine: (bridge-668101) DBG | trying to create private KVM network mk-bridge-668101 192.168.72.0/24...
	I0630 15:52:55.898159 1620744 main.go:141] libmachine: (bridge-668101) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 ...
	I0630 15:52:55.898202 1620744 main.go:141] libmachine: (bridge-668101) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 15:52:55.898214 1620744 main.go:141] libmachine: (bridge-668101) DBG | private KVM network mk-bridge-668101 192.168.72.0/24 created
	I0630 15:52:55.898234 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.898059 1620899 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:55.898373 1620744 main.go:141] libmachine: (bridge-668101) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 15:52:56.221476 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.221233 1620899 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa...
	I0630 15:52:56.640944 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.640745 1620899 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/bridge-668101.rawdisk...
	I0630 15:52:56.640998 1620744 main.go:141] libmachine: (bridge-668101) DBG | Writing magic tar header
	I0630 15:52:56.641019 1620744 main.go:141] libmachine: (bridge-668101) DBG | Writing SSH key tar header
	I0630 15:52:56.641031 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.640908 1620899 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 ...
	I0630 15:52:56.641054 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101
	I0630 15:52:56.641093 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 (perms=drwx------)
	I0630 15:52:56.641214 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 15:52:56.641244 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:56.641260 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 15:52:56.641272 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 15:52:56.641286 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 15:52:56.641298 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins
	I0630 15:52:56.641308 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home
	I0630 15:52:56.641320 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 15:52:56.641331 1620744 main.go:141] libmachine: (bridge-668101) DBG | skipping /home - not owner
	I0630 15:52:56.641357 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 15:52:56.641377 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 15:52:56.641386 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 15:52:56.641397 1620744 main.go:141] libmachine: (bridge-668101) creating domain...
	I0630 15:52:56.642571 1620744 main.go:141] libmachine: (bridge-668101) define libvirt domain using xml: 
	I0630 15:52:56.642602 1620744 main.go:141] libmachine: (bridge-668101) <domain type='kvm'>
	I0630 15:52:56.642633 1620744 main.go:141] libmachine: (bridge-668101)   <name>bridge-668101</name>
	I0630 15:52:56.642652 1620744 main.go:141] libmachine: (bridge-668101)   <memory unit='MiB'>3072</memory>
	I0630 15:52:56.642667 1620744 main.go:141] libmachine: (bridge-668101)   <vcpu>2</vcpu>
	I0630 15:52:56.642691 1620744 main.go:141] libmachine: (bridge-668101)   <features>
	I0630 15:52:56.642705 1620744 main.go:141] libmachine: (bridge-668101)     <acpi/>
	I0630 15:52:56.642713 1620744 main.go:141] libmachine: (bridge-668101)     <apic/>
	I0630 15:52:56.642725 1620744 main.go:141] libmachine: (bridge-668101)     <pae/>
	I0630 15:52:56.642745 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.642783 1620744 main.go:141] libmachine: (bridge-668101)   </features>
	I0630 15:52:56.642806 1620744 main.go:141] libmachine: (bridge-668101)   <cpu mode='host-passthrough'>
	I0630 15:52:56.642838 1620744 main.go:141] libmachine: (bridge-668101)   
	I0630 15:52:56.642863 1620744 main.go:141] libmachine: (bridge-668101)   </cpu>
	I0630 15:52:56.642880 1620744 main.go:141] libmachine: (bridge-668101)   <os>
	I0630 15:52:56.642900 1620744 main.go:141] libmachine: (bridge-668101)     <type>hvm</type>
	I0630 15:52:56.642914 1620744 main.go:141] libmachine: (bridge-668101)     <boot dev='cdrom'/>
	I0630 15:52:56.642925 1620744 main.go:141] libmachine: (bridge-668101)     <boot dev='hd'/>
	I0630 15:52:56.642944 1620744 main.go:141] libmachine: (bridge-668101)     <bootmenu enable='no'/>
	I0630 15:52:56.642956 1620744 main.go:141] libmachine: (bridge-668101)   </os>
	I0630 15:52:56.642969 1620744 main.go:141] libmachine: (bridge-668101)   <devices>
	I0630 15:52:56.642980 1620744 main.go:141] libmachine: (bridge-668101)     <disk type='file' device='cdrom'>
	I0630 15:52:56.642999 1620744 main.go:141] libmachine: (bridge-668101)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/boot2docker.iso'/>
	I0630 15:52:56.643011 1620744 main.go:141] libmachine: (bridge-668101)       <target dev='hdc' bus='scsi'/>
	I0630 15:52:56.643025 1620744 main.go:141] libmachine: (bridge-668101)       <readonly/>
	I0630 15:52:56.643041 1620744 main.go:141] libmachine: (bridge-668101)     </disk>
	I0630 15:52:56.643059 1620744 main.go:141] libmachine: (bridge-668101)     <disk type='file' device='disk'>
	I0630 15:52:56.643073 1620744 main.go:141] libmachine: (bridge-668101)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 15:52:56.643102 1620744 main.go:141] libmachine: (bridge-668101)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/bridge-668101.rawdisk'/>
	I0630 15:52:56.643114 1620744 main.go:141] libmachine: (bridge-668101)       <target dev='hda' bus='virtio'/>
	I0630 15:52:56.643122 1620744 main.go:141] libmachine: (bridge-668101)     </disk>
	I0630 15:52:56.643135 1620744 main.go:141] libmachine: (bridge-668101)     <interface type='network'>
	I0630 15:52:56.643147 1620744 main.go:141] libmachine: (bridge-668101)       <source network='mk-bridge-668101'/>
	I0630 15:52:56.643170 1620744 main.go:141] libmachine: (bridge-668101)       <model type='virtio'/>
	I0630 15:52:56.643189 1620744 main.go:141] libmachine: (bridge-668101)     </interface>
	I0630 15:52:56.643202 1620744 main.go:141] libmachine: (bridge-668101)     <interface type='network'>
	I0630 15:52:56.643213 1620744 main.go:141] libmachine: (bridge-668101)       <source network='default'/>
	I0630 15:52:56.643225 1620744 main.go:141] libmachine: (bridge-668101)       <model type='virtio'/>
	I0630 15:52:56.643235 1620744 main.go:141] libmachine: (bridge-668101)     </interface>
	I0630 15:52:56.643244 1620744 main.go:141] libmachine: (bridge-668101)     <serial type='pty'>
	I0630 15:52:56.643254 1620744 main.go:141] libmachine: (bridge-668101)       <target port='0'/>
	I0630 15:52:56.643269 1620744 main.go:141] libmachine: (bridge-668101)     </serial>
	I0630 15:52:56.643284 1620744 main.go:141] libmachine: (bridge-668101)     <console type='pty'>
	I0630 15:52:56.643297 1620744 main.go:141] libmachine: (bridge-668101)       <target type='serial' port='0'/>
	I0630 15:52:56.643307 1620744 main.go:141] libmachine: (bridge-668101)     </console>
	I0630 15:52:56.643318 1620744 main.go:141] libmachine: (bridge-668101)     <rng model='virtio'>
	I0630 15:52:56.643330 1620744 main.go:141] libmachine: (bridge-668101)       <backend model='random'>/dev/random</backend>
	I0630 15:52:56.643341 1620744 main.go:141] libmachine: (bridge-668101)     </rng>
	I0630 15:52:56.643348 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.643370 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.643393 1620744 main.go:141] libmachine: (bridge-668101)   </devices>
	I0630 15:52:56.643405 1620744 main.go:141] libmachine: (bridge-668101) </domain>
	I0630 15:52:56.643415 1620744 main.go:141] libmachine: (bridge-668101) 
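Similarly, with the domain XML above in a file, the virsh equivalent of defining and booting the VM would be:

    # Define the VM, boot it, and (later) query its DHCP-assigned address.
    virsh define bridge-668101.xml
    virsh start bridge-668101
    virsh domifaddr bridge-668101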
	I0630 15:52:56.648384 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:c9:a1:4d in network default
	I0630 15:52:56.649121 1620744 main.go:141] libmachine: (bridge-668101) starting domain...
	I0630 15:52:56.649143 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:56.649148 1620744 main.go:141] libmachine: (bridge-668101) ensuring networks are active...
	I0630 15:52:56.649950 1620744 main.go:141] libmachine: (bridge-668101) Ensuring network default is active
	I0630 15:52:56.650256 1620744 main.go:141] libmachine: (bridge-668101) Ensuring network mk-bridge-668101 is active
	I0630 15:52:56.650853 1620744 main.go:141] libmachine: (bridge-668101) getting domain XML...
	I0630 15:52:56.651713 1620744 main.go:141] libmachine: (bridge-668101) creating domain...
	I0630 15:52:57.410163 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:57.414146 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:57.414618 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:57.414653 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:57.414941 1619158 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0630 15:52:57.419663 1619158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
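That hosts-file one-liner, unpacked for readability (same effect):

    # Drop any stale host.minikube.internal entry, append the gateway IP,
    # and copy the result back via sudo since /etc/hosts is root-owned.
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.50.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts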
	I0630 15:52:57.434011 1619158 kubeadm.go:875] updating cluster {Name:flannel-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:52:57.434146 1619158 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:52:57.434191 1619158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:52:57.470291 1619158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:52:57.470364 1619158 ssh_runner.go:195] Run: which lz4
	I0630 15:52:57.475237 1619158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:52:57.480568 1619158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:52:57.480607 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:52:59.283095 1619158 crio.go:462] duration metric: took 1.807899896s to copy over tarball
	I0630 15:52:59.283202 1619158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
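In sketch form, the preload sequence above is: check whether the tarball already sits on the guest, copy it over from the host cache if not, then unpack it into /var so CRI-O finds the images (paths shortened; the guest-side commands run over SSH):

    # On the guest: does the preload tarball already exist?
    stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null \
      || echo "missing; the host scp's the cached tarball over"
    # From the host cache (glob stands in for the exact versioned name):
    scp ~/.minikube/cache/preloaded-tarball/preloaded-images-*-cri-o-*.tar.lz4 \
        docker@192.168.50.164:/preloaded.tar.lz4
    # On the guest: unpack image and overlay data into /var, keeping xattrs.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4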
	I0630 15:52:57.821154 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:57.853607 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:57.853696 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:57.914164 1612198 cri.go:89] found id: ""
	I0630 15:52:57.914210 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.914227 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:57.914246 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:57.914347 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:57.987318 1612198 cri.go:89] found id: ""
	I0630 15:52:57.987351 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.987366 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:57.987377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:57.987457 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:58.079419 1612198 cri.go:89] found id: ""
	I0630 15:52:58.079447 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.079455 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:58.079462 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:58.079527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:58.159322 1612198 cri.go:89] found id: ""
	I0630 15:52:58.159364 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.159376 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:58.159385 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:58.159456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:58.214549 1612198 cri.go:89] found id: ""
	I0630 15:52:58.214589 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.214605 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:58.214614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:58.214688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:58.268709 1612198 cri.go:89] found id: ""
	I0630 15:52:58.268743 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.268755 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:58.268764 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:58.268865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:58.336282 1612198 cri.go:89] found id: ""
	I0630 15:52:58.336316 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.336327 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:58.336335 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:58.336411 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:58.385539 1612198 cri.go:89] found id: ""
	I0630 15:52:58.385568 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.385577 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:58.385587 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:58.385600 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:58.490925 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:58.490953 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:58.490966 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:58.595534 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:58.595636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:58.670912 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:58.670947 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:58.746686 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:58.746777 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.264137 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:01.286226 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:01.286330 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:01.365280 1612198 cri.go:89] found id: ""
	I0630 15:53:01.365314 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.365328 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:01.365336 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:01.365446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:01.416551 1612198 cri.go:89] found id: ""
	I0630 15:53:01.416609 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.416628 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:01.416639 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:01.416760 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:01.466901 1612198 cri.go:89] found id: ""
	I0630 15:53:01.466951 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.466968 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:01.466992 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:01.467076 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:01.515958 1612198 cri.go:89] found id: ""
	I0630 15:53:01.516004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.516018 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:01.516026 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:01.516100 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:01.556162 1612198 cri.go:89] found id: ""
	I0630 15:53:01.556199 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.556212 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:01.556220 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:01.556294 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:01.596633 1612198 cri.go:89] found id: ""
	I0630 15:53:01.596668 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.596681 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:01.596701 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:01.596767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:01.643515 1612198 cri.go:89] found id: ""
	I0630 15:53:01.643544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.643553 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:01.643560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:01.643623 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:01.688673 1612198 cri.go:89] found id: ""
	I0630 15:53:01.688716 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.688730 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:01.688746 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:01.688763 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:01.732854 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:01.732887 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:01.792838 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:01.792898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.809743 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:01.809803 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:01.893975 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:01.894006 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:01.894020 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:58.300955 1620744 main.go:141] libmachine: (bridge-668101) waiting for IP...
	I0630 15:52:58.302501 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.303671 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.303696 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.303566 1620899 retry.go:31] will retry after 218.695917ms: waiting for domain to come up
	I0630 15:52:58.524255 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.525158 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.525190 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.525070 1620899 retry.go:31] will retry after 355.788445ms: waiting for domain to come up
	I0630 15:52:58.882797 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.883330 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.883352 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.883258 1620899 retry.go:31] will retry after 433.916696ms: waiting for domain to come up
	I0630 15:52:59.319443 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:59.320277 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:59.320312 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:59.320255 1620899 retry.go:31] will retry after 591.607748ms: waiting for domain to come up
	I0630 15:52:59.914140 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:59.914771 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:59.914833 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:59.914762 1620899 retry.go:31] will retry after 653.936151ms: waiting for domain to come up
	I0630 15:53:00.571061 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:00.571855 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:00.571885 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:00.571800 1620899 retry.go:31] will retry after 843.188018ms: waiting for domain to come up
	I0630 15:53:01.416477 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:01.417384 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:01.417447 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:01.417320 1620899 retry.go:31] will retry after 766.048685ms: waiting for domain to come up
	I0630 15:53:02.185256 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:02.185660 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:02.185690 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:02.185641 1620899 retry.go:31] will retry after 1.410798952s: waiting for domain to come up
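What the retry loop above is waiting for can be checked by hand against libvirt's DHCP leases; a sketch with a growing delay between attempts:

    # Poll the private network's leases until the domain's MAC appears.
    mac=52:54:00:de:25:66
    delay=0.2
    until virsh net-dhcp-leases mk-bridge-668101 | grep -q "$mac"; do
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN{print d*1.5}')   # crude backoff
    done
    virsh net-dhcp-leases mk-bridge-668101 | grep "$mac"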
	I0630 15:53:01.524921 1619158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241677784s)
	I0630 15:53:01.524971 1619158 crio.go:469] duration metric: took 2.241824009s to extract the tarball
	I0630 15:53:01.524981 1619158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 15:53:01.580282 1619158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:01.626979 1619158 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:53:01.627012 1619158 cache_images.go:84] Images are preloaded, skipping loading
	I0630 15:53:01.627022 1619158 kubeadm.go:926] updating node { 192.168.50.164 8443 v1.33.2 crio true true} ...
	I0630 15:53:01.627165 1619158 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0630 15:53:01.627252 1619158 ssh_runner.go:195] Run: crio config
	I0630 15:53:01.702008 1619158 cni.go:84] Creating CNI manager for "flannel"
	I0630 15:53:01.702063 1619158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:53:01.702098 1619158 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-668101 NodeName:flannel-668101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:53:01.702303 1619158 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-668101"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
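The rendered config (written to /var/tmp/minikube/kubeadm.yaml.new a few lines below) can be sanity-checked on the guest before kubeadm consumes it; `kubeadm config validate` exists in kubeadm v1.26 and later, so the bundled v1.33.2 binary supports it:

    # Validate the generated kubeadm config against the v1beta4 schema.
    sudo /var/lib/minikube/binaries/v1.33.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new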
	I0630 15:53:01.702411 1619158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:53:01.715795 1619158 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:53:01.715889 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:53:01.729847 1619158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0630 15:53:01.752217 1619158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:53:01.775084 1619158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0630 15:53:01.796311 1619158 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0630 15:53:01.801900 1619158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:53:01.819789 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:01.986382 1619158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:02.019955 1619158 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101 for IP: 192.168.50.164
	I0630 15:53:02.019984 1619158 certs.go:194] generating shared ca certs ...
	I0630 15:53:02.020008 1619158 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.020252 1619158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:53:02.020336 1619158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:53:02.020356 1619158 certs.go:256] generating profile certs ...
	I0630 15:53:02.020447 1619158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key
	I0630 15:53:02.020471 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt with IP's: []
	I0630 15:53:02.580979 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt ...
	I0630 15:53:02.581014 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: {Name:mk57dc79d0a2f5ced3dc3dbf5df60db658cd128d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.581193 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key ...
	I0630 15:53:02.581204 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key: {Name:mkc12787b7a2e7f85b5efc0fe2ad3bd4bb3a36c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.581279 1619158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315
	I0630 15:53:02.581294 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.164]
	I0630 15:53:02.891830 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 ...
	I0630 15:53:02.891864 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315: {Name:mk4a3b251c65c4f6336605ebde0fd2b6394224cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.892035 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315 ...
	I0630 15:53:02.892047 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315: {Name:mkfdd1175258bc2f41de0b5ea2ff2aa4d2ba1824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.892138 1619158 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt
	I0630 15:53:02.892212 1619158 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key
	I0630 15:53:02.892263 1619158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key
	I0630 15:53:02.892288 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt with IP's: []
	I0630 15:53:03.110294 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt ...
	I0630 15:53:03.110338 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt: {Name:mk5f2a1c5ffd32a7751cdaa24de023db01340134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:03.110558 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key ...
	I0630 15:53:03.110576 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key: {Name:mk75d7060f89bcef318a4de6ba9f3f077d54a76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:03.110779 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:53:03.110831 1619158 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:53:03.110847 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:53:03.110885 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:53:03.110918 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:53:03.110952 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:53:03.111006 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:53:03.111669 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:53:03.143651 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:53:03.173382 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:53:03.207609 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:53:03.239807 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 15:53:03.271613 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 15:53:03.304865 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:53:03.336277 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:53:03.367070 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:53:03.399740 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:53:03.431108 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:53:03.469922 1619158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:53:03.496991 1619158 ssh_runner.go:195] Run: openssl version
	I0630 15:53:03.503713 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:53:03.519935 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.525171 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.525235 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.533074 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:53:03.546306 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:53:03.560844 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.566199 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.566277 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.573685 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:53:03.589057 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:53:03.614844 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.621765 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.621846 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.631593 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
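The openssl x509 -hash calls above compute the subject hash that OpenSSL uses to look up CAs in /etc/ssl/certs: each certificate is published through a symlink named <hash>.0 (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two user cert bundles). A minimal sketch of the same dance for one certificate, assuming the paths from the log:

	# Compute the OpenSSL subject hash for the CA ...
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# ... and publish it under /etc/ssl/certs/<hash>.0 so TLS clients trust it
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"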
	I0630 15:53:03.649952 1619158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:53:03.656577 1619158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:53:03.656636 1619158 kubeadm.go:392] StartCluster: {Name:flannel-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:53:03.656726 1619158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:53:03.656792 1619158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:53:03.706253 1619158 cri.go:89] found id: ""
	I0630 15:53:03.706351 1619158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:53:03.718137 1619158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:53:03.730377 1619158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:53:03.745839 1619158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:53:03.745864 1619158 kubeadm.go:157] found existing configuration files:
	
	I0630 15:53:03.745922 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:53:03.757621 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:53:03.757687 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:53:03.771916 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:53:03.784628 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:53:03.784695 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:53:03.798159 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:53:03.809990 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:53:03.810067 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:53:03.822466 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:53:03.834020 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:53:03.834138 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
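The four grep/rm pairs above are minikube's stale-kubeconfig sweep: any kubeadm-managed kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so the following kubeadm init regenerates it. A minimal sketch of the equivalent loop, assuming the endpoint from the log:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		# keep the file only if it already targets the expected endpoint
		sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done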
	I0630 15:53:03.845749 1619158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:53:04.003225 1619158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:53:04.474834 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:04.495812 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:04.495894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:04.545620 1612198 cri.go:89] found id: ""
	I0630 15:53:04.545652 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.545664 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:04.545674 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:04.545819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:04.595168 1612198 cri.go:89] found id: ""
	I0630 15:53:04.595303 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.595325 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:04.595339 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:04.595423 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:04.648158 1612198 cri.go:89] found id: ""
	I0630 15:53:04.648189 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.648201 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:04.648210 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:04.648279 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:04.695407 1612198 cri.go:89] found id: ""
	I0630 15:53:04.695441 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.695452 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:04.695460 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:04.695525 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:04.745024 1612198 cri.go:89] found id: ""
	I0630 15:53:04.745059 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.745072 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:04.745079 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:04.745147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:04.784238 1612198 cri.go:89] found id: ""
	I0630 15:53:04.784278 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.784291 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:04.784301 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:04.784375 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:04.828921 1612198 cri.go:89] found id: ""
	I0630 15:53:04.828962 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.828976 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:04.828986 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:04.829058 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:04.878950 1612198 cri.go:89] found id: ""
	I0630 15:53:04.878980 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.878992 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:04.879004 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:04.879021 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:04.898852 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:04.898883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:04.994919 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:04.994955 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:04.994971 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:05.081838 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:05.081891 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:05.134599 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:05.134639 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
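Each "listing CRI containers" block above is one sweep over the expected control-plane component names using crictl's name filter against the CRI-O socket; an empty ID list is what produces the "No container was found" warnings while the apiserver is down. A minimal sketch of the same sweep:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
		ids=$(sudo crictl ps -a --quiet --name="$name")
		[ -z "$ids" ] && echo "no container was found matching \"$name\""
	done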
	I0630 15:53:03.598543 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:03.599016 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:03.599041 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:03.599011 1620899 retry.go:31] will retry after 1.276009124s: waiting for domain to come up
	I0630 15:53:04.876532 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:04.877133 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:04.877161 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:04.877082 1620899 retry.go:31] will retry after 1.605247273s: waiting for domain to come up
	I0630 15:53:06.483950 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:06.484698 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:06.484730 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:06.484666 1620899 retry.go:31] will retry after 2.436119373s: waiting for domain to come up
	I0630 15:53:07.707840 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:07.724492 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:07.724584 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:07.764489 1612198 cri.go:89] found id: ""
	I0630 15:53:07.764533 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.764545 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:07.764553 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:07.764641 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:07.813734 1612198 cri.go:89] found id: ""
	I0630 15:53:07.813762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.813771 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:07.813777 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:07.813838 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:07.866385 1612198 cri.go:89] found id: ""
	I0630 15:53:07.866412 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.866420 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:07.866426 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:07.866480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:07.913274 1612198 cri.go:89] found id: ""
	I0630 15:53:07.913307 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.913317 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:07.913325 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:07.913394 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:07.966418 1612198 cri.go:89] found id: ""
	I0630 15:53:07.966461 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.966475 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:07.966484 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:07.966554 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:08.017379 1612198 cri.go:89] found id: ""
	I0630 15:53:08.017443 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.017457 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:08.017465 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:08.017559 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:08.070396 1612198 cri.go:89] found id: ""
	I0630 15:53:08.070427 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.070440 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:08.070449 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:08.070519 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:08.118074 1612198 cri.go:89] found id: ""
	I0630 15:53:08.118118 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.118132 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:08.118146 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:08.118164 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:08.139695 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:08.139728 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:08.252659 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:08.252683 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:08.252698 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:08.381553 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:08.381602 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:08.448865 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:08.448912 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.032838 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:11.059173 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:11.059251 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:11.115790 1612198 cri.go:89] found id: ""
	I0630 15:53:11.115826 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.115839 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:11.115848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:11.115920 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:11.175246 1612198 cri.go:89] found id: ""
	I0630 15:53:11.175295 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.175307 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:11.175316 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:11.175389 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:11.230317 1612198 cri.go:89] found id: ""
	I0630 15:53:11.230349 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.230360 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:11.230368 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:11.230437 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:11.283786 1612198 cri.go:89] found id: ""
	I0630 15:53:11.283827 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.283839 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:11.283848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:11.283927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:11.334412 1612198 cri.go:89] found id: ""
	I0630 15:53:11.334437 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.334445 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:11.334451 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:11.334508 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:11.399160 1612198 cri.go:89] found id: ""
	I0630 15:53:11.399195 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.399208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:11.399218 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:11.399307 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:11.461034 1612198 cri.go:89] found id: ""
	I0630 15:53:11.461065 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.461078 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:11.461087 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:11.461144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:11.509139 1612198 cri.go:89] found id: ""
	I0630 15:53:11.509169 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.509180 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:11.509194 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:11.509217 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:11.560268 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:11.560316 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.616198 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:11.616253 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:11.636775 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:11.636820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:11.735910 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:11.735936 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:11.735954 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:08.922659 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:08.923323 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:08.923356 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:08.923288 1620899 retry.go:31] will retry after 3.297531276s: waiting for domain to come up
	I0630 15:53:12.222353 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:12.223035 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:12.223068 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:12.222990 1620899 retry.go:31] will retry after 3.51443735s: waiting for domain to come up
	I0630 15:53:17.014584 1619158 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 15:53:17.014637 1619158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:53:17.014706 1619158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:53:17.014838 1619158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:53:17.014964 1619158 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 15:53:17.015057 1619158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:53:17.016771 1619158 out.go:235]   - Generating certificates and keys ...
	I0630 15:53:17.016879 1619158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:53:17.016954 1619158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:53:17.017037 1619158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:53:17.017140 1619158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:53:17.017235 1619158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:53:17.017318 1619158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:53:17.017382 1619158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:53:17.017508 1619158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-668101 localhost] and IPs [192.168.50.164 127.0.0.1 ::1]
	I0630 15:53:17.017557 1619158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:53:17.017714 1619158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-668101 localhost] and IPs [192.168.50.164 127.0.0.1 ::1]
	I0630 15:53:17.017816 1619158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:53:17.017907 1619158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:53:17.017980 1619158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:53:17.018051 1619158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:53:17.018104 1619158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:53:17.018164 1619158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 15:53:17.018252 1619158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:53:17.018322 1619158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:53:17.018382 1619158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:53:17.018488 1619158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:53:17.018583 1619158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:53:17.020268 1619158 out.go:235]   - Booting up control plane ...
	I0630 15:53:17.020370 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:53:17.020449 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:53:17.020523 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:53:17.020623 1619158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:53:17.020700 1619158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:53:17.020739 1619158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:53:17.020859 1619158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 15:53:17.020953 1619158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 15:53:17.021008 1619158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.332284ms
	I0630 15:53:17.021092 1619158 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 15:53:17.021178 1619158 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.50.164:8443/livez
	I0630 15:53:17.021267 1619158 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 15:53:17.021346 1619158 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 15:53:17.021442 1619158 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.131571599s
	I0630 15:53:17.021510 1619158 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.852171886s
	I0630 15:53:17.021568 1619158 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002662518s
	I0630 15:53:17.021665 1619158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 15:53:17.021773 1619158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 15:53:17.021830 1619158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 15:53:17.022015 1619158 kubeadm.go:310] [mark-control-plane] Marking the node flannel-668101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 15:53:17.022075 1619158 kubeadm.go:310] [bootstrap-token] Using token: ux2a4n.m86z51knn5xjib22
	I0630 15:53:17.023469 1619158 out.go:235]   - Configuring RBAC rules ...
	I0630 15:53:17.023592 1619158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 15:53:17.023701 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 15:53:17.023848 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 15:53:17.023981 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 15:53:17.024113 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 15:53:17.024200 1619158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 15:53:17.024304 1619158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 15:53:17.024347 1619158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 15:53:17.024396 1619158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 15:53:17.024424 1619158 kubeadm.go:310] 
	I0630 15:53:17.024503 1619158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 15:53:17.024510 1619158 kubeadm.go:310] 
	I0630 15:53:17.024574 1619158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 15:53:17.024580 1619158 kubeadm.go:310] 
	I0630 15:53:17.024600 1619158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 15:53:17.024654 1619158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 15:53:17.024696 1619158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 15:53:17.024705 1619158 kubeadm.go:310] 
	I0630 15:53:17.024750 1619158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 15:53:17.024756 1619158 kubeadm.go:310] 
	I0630 15:53:17.024799 1619158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 15:53:17.024805 1619158 kubeadm.go:310] 
	I0630 15:53:17.024848 1619158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 15:53:17.024952 1619158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 15:53:17.025026 1619158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 15:53:17.025033 1619158 kubeadm.go:310] 
	I0630 15:53:17.025114 1619158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 15:53:17.025179 1619158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 15:53:17.025185 1619158 kubeadm.go:310] 
	I0630 15:53:17.025258 1619158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ux2a4n.m86z51knn5xjib22 \
	I0630 15:53:17.025350 1619158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 15:53:17.025370 1619158 kubeadm.go:310] 	--control-plane 
	I0630 15:53:17.025374 1619158 kubeadm.go:310] 
	I0630 15:53:17.025507 1619158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 15:53:17.025515 1619158 kubeadm.go:310] 
	I0630 15:53:17.025583 1619158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ux2a4n.m86z51knn5xjib22 \
	I0630 15:53:17.025707 1619158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
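If the join output above is ever lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA public key; the standard OpenSSL recipe, assuming the certificatesDir (/var/lib/minikube/certs) from the kubeadm config above and an RSA-keyed CA:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		| openssl rsa -pubin -outform der 2>/dev/null \
		| openssl dgst -sha256 -hex | sed 's/^.* //'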
	I0630 15:53:17.025719 1619158 cni.go:84] Creating CNI manager for "flannel"
	I0630 15:53:17.027099 1619158 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0630 15:53:14.327948 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:14.347007 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:14.347078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:14.391736 1612198 cri.go:89] found id: ""
	I0630 15:53:14.391770 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.391782 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:14.391790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:14.391855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:14.438236 1612198 cri.go:89] found id: ""
	I0630 15:53:14.438274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.438286 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:14.438294 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:14.438381 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:14.479508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.479539 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.479550 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:14.479558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:14.479618 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:14.530347 1612198 cri.go:89] found id: ""
	I0630 15:53:14.530386 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.530400 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:14.530409 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:14.530480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:14.576356 1612198 cri.go:89] found id: ""
	I0630 15:53:14.576392 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.576404 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:14.576413 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:14.576491 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:14.627508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.627546 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.627557 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:14.627565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:14.627636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:14.674780 1612198 cri.go:89] found id: ""
	I0630 15:53:14.674808 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.674824 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:14.674832 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:14.674899 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:14.717562 1612198 cri.go:89] found id: ""
	I0630 15:53:14.717599 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.717611 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:14.717624 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:14.717655 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:14.801031 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:14.801063 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:14.801083 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:14.890511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:14.890559 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:14.953255 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:14.953300 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:15.023105 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:15.023160 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:17.543438 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:17.564446 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:17.564545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:17.602287 1612198 cri.go:89] found id: ""
	I0630 15:53:17.602336 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.602349 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:17.602358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:17.602449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:17.643215 1612198 cri.go:89] found id: ""
	I0630 15:53:17.643246 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.643259 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:17.643266 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:17.643328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:15.813970 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:15.814578 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:15.814693 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:15.814493 1620899 retry.go:31] will retry after 4.330770463s: waiting for domain to come up
	I0630 15:53:17.028285 1619158 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0630 15:53:17.034603 1619158 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.33.2/kubectl ...
	I0630 15:53:17.034627 1619158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0630 15:53:17.064463 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0630 15:53:17.543422 1619158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:53:17.543486 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:17.543598 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-668101 minikube.k8s.io/updated_at=2025_06_30T15_53_17_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=flannel-668101 minikube.k8s.io/primary=true
	I0630 15:53:17.594413 1619158 ops.go:34] apiserver oom_adj: -16
	I0630 15:53:17.727637 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:18.228526 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:18.727798 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:19.227728 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:19.728564 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:20.227759 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:20.728760 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.228341 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.728419 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.856237 1619158 kubeadm.go:1105] duration metric: took 4.312811681s to wait for elevateKubeSystemPrivileges
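The repeated "kubectl get sa default" calls above are a readiness poll: the default ServiceAccount only exists once the controller-manager's serviceaccount controller is running, so minikube retries roughly every 500ms until it appears; that is the 4.3s elevateKubeSystemPrivileges wait reported here. A minimal sketch of the same wait, assuming the kubeconfig path from the log:

	# poll until the serviceaccount controller has created "default"
	until sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		sleep 0.5
	done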
	I0630 15:53:21.856299 1619158 kubeadm.go:394] duration metric: took 18.199648133s to StartCluster
	I0630 15:53:21.856325 1619158 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:21.856421 1619158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:53:21.857563 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:21.857818 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 15:53:21.857835 1619158 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:53:21.857909 1619158 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:53:21.858018 1619158 addons.go:69] Setting storage-provisioner=true in profile "flannel-668101"
	I0630 15:53:21.858038 1619158 addons.go:238] Setting addon storage-provisioner=true in "flannel-668101"
	I0630 15:53:21.858043 1619158 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:21.858037 1619158 addons.go:69] Setting default-storageclass=true in profile "flannel-668101"
	I0630 15:53:21.858077 1619158 host.go:66] Checking if "flannel-668101" exists ...
	I0630 15:53:21.858106 1619158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-668101"
	I0630 15:53:21.858566 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.858573 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.858594 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.858610 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.859497 1619158 out.go:177] * Verifying Kubernetes components...
	I0630 15:53:21.861465 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:21.878756 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0630 15:53:21.879278 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.879431 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0630 15:53:21.879778 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.879797 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.879838 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.880325 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.880347 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.880358 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.880762 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.881385 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.881459 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.881515 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.885509 1619158 addons.go:238] Setting addon default-storageclass=true in "flannel-668101"
	I0630 15:53:21.885555 1619158 host.go:66] Checking if "flannel-668101" exists ...
	I0630 15:53:21.885936 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.885985 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.903264 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0630 15:53:21.903821 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.904198 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0630 15:53:21.904415 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.904440 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.904784 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.904851 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.905447 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.905503 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.906077 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.906103 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.906550 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.906795 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.913135 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:53:21.915545 1619158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:53:17.684398 1612198 cri.go:89] found id: ""
	I0630 15:53:17.684474 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.684484 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:17.684493 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:17.684567 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:17.734640 1612198 cri.go:89] found id: ""
	I0630 15:53:17.734681 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.734694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:17.734702 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:17.734787 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:17.771368 1612198 cri.go:89] found id: ""
	I0630 15:53:17.771404 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.771416 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:17.771425 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:17.771497 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:17.828694 1612198 cri.go:89] found id: ""
	I0630 15:53:17.828724 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.828732 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:17.828741 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:17.828815 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:17.870487 1612198 cri.go:89] found id: ""
	I0630 15:53:17.870535 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.870549 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:17.870558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:17.870639 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:17.907397 1612198 cri.go:89] found id: ""
	I0630 15:53:17.907430 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.907440 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:17.907451 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:17.907464 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:17.983887 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:17.983934 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:18.027406 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:18.027439 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:18.079092 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:18.079140 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:18.094309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:18.094345 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:18.168726 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:20.669207 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:20.688479 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:20.688575 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:20.729290 1612198 cri.go:89] found id: ""
	I0630 15:53:20.729317 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.729327 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:20.729334 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:20.729399 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:20.772585 1612198 cri.go:89] found id: ""
	I0630 15:53:20.772606 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.772638 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:20.772647 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:20.772704 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:20.815369 1612198 cri.go:89] found id: ""
	I0630 15:53:20.815407 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.815419 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:20.815428 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:20.815490 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:20.856251 1612198 cri.go:89] found id: ""
	I0630 15:53:20.856282 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.856294 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:20.856304 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:20.856371 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:20.895690 1612198 cri.go:89] found id: ""
	I0630 15:53:20.895723 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.895732 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:20.895743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:20.895823 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:20.938040 1612198 cri.go:89] found id: ""
	I0630 15:53:20.938075 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.938085 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:20.938094 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:20.938163 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:20.983241 1612198 cri.go:89] found id: ""
	I0630 15:53:20.983280 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.983293 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:20.983302 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:20.983373 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:21.029599 1612198 cri.go:89] found id: ""
	I0630 15:53:21.029633 1612198 logs.go:282] 0 containers: []
	W0630 15:53:21.029645 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:21.029659 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:21.029675 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:21.115729 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:21.115753 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:21.115766 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:21.192780 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:21.192824 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:21.238081 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:21.238141 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:21.298363 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:21.298437 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:20.150210 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.151081 1620744 main.go:141] libmachine: (bridge-668101) found domain IP: 192.168.72.11
	I0630 15:53:20.151108 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has current primary IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.151118 1620744 main.go:141] libmachine: (bridge-668101) reserving static IP address...
	I0630 15:53:20.151802 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find host DHCP lease matching {name: "bridge-668101", mac: "52:54:00:de:25:66", ip: "192.168.72.11"} in network mk-bridge-668101
	I0630 15:53:20.255604 1620744 main.go:141] libmachine: (bridge-668101) reserved static IP address 192.168.72.11 for domain bridge-668101
	I0630 15:53:20.255640 1620744 main.go:141] libmachine: (bridge-668101) waiting for SSH...
	I0630 15:53:20.255651 1620744 main.go:141] libmachine: (bridge-668101) DBG | Getting to WaitForSSH function...
	I0630 15:53:20.259016 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.259553 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.259578 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.259789 1620744 main.go:141] libmachine: (bridge-668101) DBG | Using SSH client type: external
	I0630 15:53:20.259817 1620744 main.go:141] libmachine: (bridge-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa (-rw-------)
	I0630 15:53:20.259855 1620744 main.go:141] libmachine: (bridge-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:53:20.259878 1620744 main.go:141] libmachine: (bridge-668101) DBG | About to run SSH command:
	I0630 15:53:20.259893 1620744 main.go:141] libmachine: (bridge-668101) DBG | exit 0
	I0630 15:53:20.389637 1620744 main.go:141] libmachine: (bridge-668101) DBG | SSH cmd err, output: <nil>: 
	I0630 15:53:20.390056 1620744 main.go:141] libmachine: (bridge-668101) KVM machine creation complete
	I0630 15:53:20.390289 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:53:20.390852 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:20.391109 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:20.391342 1620744 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:53:20.391357 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:20.392814 1620744 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:53:20.392829 1620744 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:53:20.392834 1620744 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:53:20.392840 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.396358 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.396743 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.396783 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.397085 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.397290 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.397458 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.397650 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.397853 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.398148 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.398164 1620744 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:53:20.508895 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:53:20.508932 1620744 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:53:20.508944 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.512198 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.512629 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.512658 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.512888 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.513085 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.513290 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.513461 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.513609 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.513804 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.513814 1620744 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:53:20.626452 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:53:20.626583 1620744 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:53:20.626595 1620744 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:53:20.626603 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.626863 1620744 buildroot.go:166] provisioning hostname "bridge-668101"
	I0630 15:53:20.626886 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.627111 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.630431 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.631000 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.631029 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.631318 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.631539 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.631746 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.631891 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.632041 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.632253 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.632267 1620744 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-668101 && echo "bridge-668101" | sudo tee /etc/hostname
	I0630 15:53:20.768072 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-668101
	
	I0630 15:53:20.768109 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.772078 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.772554 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.772641 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.772981 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.773268 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.773482 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.773700 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.773939 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.774161 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.774183 1620744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-668101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-668101/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-668101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:53:20.912221 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:53:20.912262 1620744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:53:20.912306 1620744 buildroot.go:174] setting up certificates
	I0630 15:53:20.912324 1620744 provision.go:84] configureAuth start
	I0630 15:53:20.912343 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.912731 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:20.916012 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.916475 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.916519 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.916686 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.919828 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.920293 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.920328 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.920495 1620744 provision.go:143] copyHostCerts
	I0630 15:53:20.920585 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:53:20.920609 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:53:20.920712 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:53:20.920869 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:53:20.920882 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:53:20.920919 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:53:20.921008 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:53:20.921018 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:53:20.921044 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:53:20.921126 1620744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.bridge-668101 san=[127.0.0.1 192.168.72.11 bridge-668101 localhost minikube]
	I0630 15:53:21.264068 1620744 provision.go:177] copyRemoteCerts
	I0630 15:53:21.264165 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:53:21.264213 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.268086 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.268409 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.268452 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.268601 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.268924 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.269110 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.269238 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:21.361451 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:53:21.391187 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 15:53:21.419255 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:53:21.448237 1620744 provision.go:87] duration metric: took 535.893652ms to configureAuth
	I0630 15:53:21.448274 1620744 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:53:21.448476 1620744 config.go:182] Loaded profile config "bridge-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:21.448584 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.453284 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.453882 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.453912 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.454135 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.454353 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.454521 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.454680 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.454822 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:21.455051 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:21.455078 1620744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:53:21.715413 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:53:21.715442 1620744 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:53:21.715451 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetURL
	I0630 15:53:21.716819 1620744 main.go:141] libmachine: (bridge-668101) DBG | using libvirt version 6000000
	I0630 15:53:21.719440 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.719824 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.719856 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.719970 1620744 main.go:141] libmachine: Docker is up and running!
	I0630 15:53:21.719983 1620744 main.go:141] libmachine: Reticulating splines...
	I0630 15:53:21.719993 1620744 client.go:171] duration metric: took 25.917938791s to LocalClient.Create
	I0630 15:53:21.720027 1620744 start.go:167] duration metric: took 25.918028738s to libmachine.API.Create "bridge-668101"
	I0630 15:53:21.720040 1620744 start.go:293] postStartSetup for "bridge-668101" (driver="kvm2")
	I0630 15:53:21.720054 1620744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:53:21.720081 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.720445 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:53:21.720475 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.723380 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.723862 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.723895 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.724514 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.724885 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.725127 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.725432 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:21.813595 1620744 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:53:21.818546 1620744 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:53:21.818584 1620744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:53:21.818645 1620744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:53:21.818728 1620744 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:53:21.818833 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:53:21.830037 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:53:21.862135 1620744 start.go:296] duration metric: took 142.08086ms for postStartSetup
	I0630 15:53:21.862197 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:53:21.862968 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:21.866304 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.866720 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.866752 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.867254 1620744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json ...
	I0630 15:53:21.867599 1620744 start.go:128] duration metric: took 26.08874701s to createHost
	I0630 15:53:21.867640 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.870855 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.871356 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.871397 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.871563 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.871789 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.871989 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.872148 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.872344 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:21.872607 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:21.872619 1620744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:53:21.990814 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298801.970811827
	
	I0630 15:53:21.990846 1620744 fix.go:216] guest clock: 1751298801.970811827
	I0630 15:53:21.990856 1620744 fix.go:229] Guest: 2025-06-30 15:53:21.970811827 +0000 UTC Remote: 2025-06-30 15:53:21.867622048 +0000 UTC m=+38.958890662 (delta=103.189779ms)
	I0630 15:53:21.990888 1620744 fix.go:200] guest clock delta is within tolerance: 103.189779ms
	I0630 15:53:21.990895 1620744 start.go:83] releasing machines lock for "bridge-668101", held for 26.212259549s
	I0630 15:53:21.990921 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.991256 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:21.994862 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.995334 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.995365 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.995601 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996174 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996422 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996540 1620744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:53:21.996586 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.996665 1620744 ssh_runner.go:195] Run: cat /version.json
	I0630 15:53:21.996697 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:22.000078 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000431 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:22.000471 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000574 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000868 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:22.001096 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:22.001101 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:22.001197 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.001278 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:22.001303 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:22.001484 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:22.001499 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:22.001633 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:22.001809 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:22.115933 1620744 ssh_runner.go:195] Run: systemctl --version
	I0630 15:53:22.124264 1620744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:53:22.297158 1620744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:53:22.303464 1620744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:53:22.303535 1620744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:53:22.322898 1620744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 15:53:22.322933 1620744 start.go:495] detecting cgroup driver to use...
	I0630 15:53:22.323033 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:53:22.346693 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:53:22.370685 1620744 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:53:22.370799 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:53:22.388014 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:53:22.405538 1620744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:53:22.556327 1620744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:53:22.736266 1620744 docker.go:246] disabling docker service ...
	I0630 15:53:22.736364 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:53:22.755856 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:53:22.773629 1620744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:53:21.916791 1619158 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:21.916818 1619158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:53:21.916850 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:53:21.920269 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.920634 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:53:21.920657 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.920814 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:53:21.921063 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.921260 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:53:21.921462 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:53:21.930939 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0630 15:53:21.931592 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.932329 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.932352 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.932845 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.933076 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.935023 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:53:21.935343 1619158 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:21.935362 1619158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:53:21.935385 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:53:21.938667 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.939066 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:53:21.939089 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.939228 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:53:21.939438 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.939561 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:53:21.939667 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:53:22.100716 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 15:53:22.185715 1619158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:22.445585 1619158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:22.457596 1619158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:22.671125 1619158 start.go:972] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0630 15:53:22.672317 1619158 node_ready.go:35] waiting up to 15m0s for node "flannel-668101" to be "Ready" ...
	I0630 15:53:22.953479 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.953512 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.953863 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.953868 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:22.953885 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:22.953895 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.953902 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.954132 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.954147 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:22.966064 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.966091 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.966575 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:22.966595 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.966608 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.178366 1619158 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-668101" context rescaled to 1 replicas
	I0630 15:53:23.182951 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:23.182983 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:23.183310 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:23.183341 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.183352 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:23.183359 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:23.183771 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:23.183785 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.183846 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:23.185609 1619158 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0630 15:53:22.968973 1620744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:53:23.133301 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:53:23.155249 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:53:23.183726 1620744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:53:23.183827 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.198004 1620744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:53:23.198112 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.210920 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.223143 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.235289 1620744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:53:23.248292 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.260423 1620744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.280821 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.293185 1620744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:53:23.305009 1620744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:53:23.305155 1620744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:53:23.321828 1620744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 15:53:23.333118 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:23.476277 1620744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:53:23.585009 1620744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:53:23.585109 1620744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:53:23.590082 1620744 start.go:563] Will wait 60s for crictl version
	I0630 15:53:23.590166 1620744 ssh_runner.go:195] Run: which crictl
	I0630 15:53:23.593975 1620744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:53:23.637313 1620744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:53:23.637475 1620744 ssh_runner.go:195] Run: crio --version
	I0630 15:53:23.668285 1620744 ssh_runner.go:195] Run: crio --version
	I0630 15:53:23.699975 1620744 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
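Taken together, the 1620744 lines above amount to a self-contained CRI-O setup. A minimal consolidated sketch, using only commands and paths that appear in the log (run as root on the guest):

	# point crictl at the CRI-O socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroup driver in the CRI-O drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# allow unprivileged low ports inside pods via default_sysctls
	sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# kernel prerequisites (br_netfilter was missing here), then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio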
	I0630 15:53:23.186948 1619158 addons.go:514] duration metric: took 1.329044999s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0630 15:53:24.675577 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:23.816993 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:23.835380 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:23.835460 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:23.877562 1612198 cri.go:89] found id: ""
	I0630 15:53:23.877598 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.877610 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:23.877618 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:23.877695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:23.919089 1612198 cri.go:89] found id: ""
	I0630 15:53:23.919130 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.919144 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:23.919152 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:23.919232 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:23.964835 1612198 cri.go:89] found id: ""
	I0630 15:53:23.964864 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.964875 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:23.964883 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:23.964956 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:24.011639 1612198 cri.go:89] found id: ""
	I0630 15:53:24.011680 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.011694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:24.011704 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:24.011791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:24.059206 1612198 cri.go:89] found id: ""
	I0630 15:53:24.059240 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.059250 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:24.059262 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:24.059335 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:24.116479 1612198 cri.go:89] found id: ""
	I0630 15:53:24.116517 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.116530 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:24.116540 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:24.116619 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:24.164108 1612198 cri.go:89] found id: ""
	I0630 15:53:24.164142 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.164153 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:24.164162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:24.164235 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:24.232264 1612198 cri.go:89] found id: ""
	I0630 15:53:24.232299 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.232312 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:24.232325 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:24.232343 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:24.334546 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:24.334577 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:24.334597 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:24.450906 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:24.450963 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:24.523317 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:24.523361 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:24.609506 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:24.609547 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
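With nothing answering on localhost:8443, the 1612198 run falls back to collecting diagnostics straight off the node. The same bundle can be pulled by hand with the commands from the log:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig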
	I0630 15:53:27.134042 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:27.156543 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:27.156635 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:27.206777 1612198 cri.go:89] found id: ""
	I0630 15:53:27.206819 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.206831 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:27.206841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:27.206924 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:27.257098 1612198 cri.go:89] found id: ""
	I0630 15:53:27.257141 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.257153 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:27.257162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:27.257226 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:27.311101 1612198 cri.go:89] found id: ""
	I0630 15:53:27.311129 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.311137 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:27.311164 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:27.311233 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:27.356225 1612198 cri.go:89] found id: ""
	I0630 15:53:27.356264 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.356276 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:27.356285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:27.356446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:27.408114 1612198 cri.go:89] found id: ""
	I0630 15:53:27.408173 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.408185 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:27.408194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:27.408264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:27.453433 1612198 cri.go:89] found id: ""
	I0630 15:53:27.453471 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.453483 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:27.453491 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:27.453560 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:27.502170 1612198 cri.go:89] found id: ""
	I0630 15:53:27.502209 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.502222 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:27.502230 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:27.502304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:27.539066 1612198 cri.go:89] found id: ""
	I0630 15:53:27.539104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.539113 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:27.539124 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:27.539157 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:27.557767 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:27.557807 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:27.661895 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:27.661924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:27.661943 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
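Each scan above is the same probe repeated once per component; a compact shell equivalent of the loop (component names as in the log) is:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done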
	I0630 15:53:23.701364 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:23.704233 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:23.704638 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:23.704669 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:23.704895 1620744 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0630 15:53:23.709158 1620744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:53:23.723315 1620744 kubeadm.go:875] updating cluster {Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:53:23.723444 1620744 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:53:23.723509 1620744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:23.763562 1620744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:53:23.763659 1620744 ssh_runner.go:195] Run: which lz4
	I0630 15:53:23.769114 1620744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:53:23.774965 1620744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:53:23.775007 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:53:25.374857 1620744 crio.go:462] duration metric: took 1.60580082s to copy over tarball
	I0630 15:53:25.374981 1620744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:53:27.865991 1620744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.490972706s)
	I0630 15:53:27.866033 1620744 crio.go:469] duration metric: took 2.491137727s to extract the tarball
	I0630 15:53:27.866044 1620744 ssh_runner.go:146] rm: /preloaded.tar.lz4
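The preload sequence above (existence check, copy, extract, delete) can be replayed by hand. A sketch with the tarball path and guest IP taken from this run; the root@ login is inferred from SSHUser:root in the cluster config:

	# on the guest: is the preload already there? (it wasn't, hence the copy)
	stat -c "%s %y" /preloaded.tar.lz4
	# copy the cached tarball from the CI host into the VM
	scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 root@192.168.72.11:/preloaded.tar.lz4
	# unpack the image store (xattrs kept so file capabilities survive), then clean up
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json   # should now list the v1.33.2 images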
	I0630 15:53:27.908959 1620744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:27.960351 1620744 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:53:27.960383 1620744 cache_images.go:84] Images are preloaded, skipping loading
	I0630 15:53:27.960392 1620744 kubeadm.go:926] updating node { 192.168.72.11 8443 v1.33.2 crio true true} ...
	I0630 15:53:27.960497 1620744 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0630 15:53:27.960566 1620744 ssh_runner.go:195] Run: crio config
	I0630 15:53:28.007607 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:53:28.007639 1620744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:53:28.007668 1620744 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-668101 NodeName:bridge-668101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:53:28.007874 1620744 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-668101"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 15:53:28.007956 1620744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:53:28.019439 1620744 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:53:28.019533 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:53:28.030681 1620744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0630 15:53:28.054217 1620744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:53:28.078657 1620744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
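With the rendered config staged as kubeadm.yaml.new, it can also be sanity-checked before init. A sketch, assuming a kubeadm release that ships the validate subcommand (recent releases do):

	sudo /var/lib/minikube/binaries/v1.33.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new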
	I0630 15:53:28.103175 1620744 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0630 15:53:28.107637 1620744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:53:28.121750 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:28.271570 1620744 ssh_runner.go:195] Run: sudo systemctl start kubelet
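The kubelet deployment above is just a unit file, a drop-in, and a restart; done manually:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	# write the [Unit]/[Service]/[Install] content shown earlier into:
	#   /lib/systemd/system/kubelet.service                     (352 bytes in this run)
	#   /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   (312 bytes in this run)
	sudo systemctl daemon-reload
	sudo systemctl start kubelet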
	I0630 15:53:28.301805 1620744 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101 for IP: 192.168.72.11
	I0630 15:53:28.301846 1620744 certs.go:194] generating shared ca certs ...
	I0630 15:53:28.301873 1620744 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.302109 1620744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:53:28.302183 1620744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:53:28.302206 1620744 certs.go:256] generating profile certs ...
	I0630 15:53:28.302293 1620744 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key
	I0630 15:53:28.302316 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt with IP's: []
	I0630 15:53:28.454855 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt ...
	I0630 15:53:28.454891 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: {Name:mk937708224110c3dd03876ac97fd50296fa97e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.455077 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key ...
	I0630 15:53:28.455095 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key: {Name:mkabac9afc77f4fa227e818a7db37dc6cde93101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.455181 1620744 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f
	I0630 15:53:28.455199 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.11]
	I0630 15:53:28.535439 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f ...
	I0630 15:53:28.535477 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f: {Name:mkb3d4c341f11f3a902e7d6409776e997bb9f0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.535666 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f ...
	I0630 15:53:28.535680 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f: {Name:mkb836a4b78458ae1ce3c620e0b6b74aca7afa96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.535756 1620744 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt
	I0630 15:53:28.535850 1620744 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key
	I0630 15:53:28.535911 1620744 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key
	I0630 15:53:28.535927 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt with IP's: []
	I0630 15:53:28.888408 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt ...
	I0630 15:53:28.888451 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt: {Name:mkf4d3b4ec0f8a5e1d05a277edfc5ceb8007805d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.888663 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key ...
	I0630 15:53:28.888680 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key: {Name:mk8ac529262f2861b6afd57f5e5bb4e1423ec462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.888902 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:53:28.888952 1620744 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:53:28.888967 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:53:28.889001 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:53:28.889037 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:53:28.889066 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:53:28.889125 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:53:28.889775 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:53:28.927242 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:53:28.967550 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:53:29.017537 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:53:29.055944 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 15:53:29.085822 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0630 15:53:29.183293 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:53:29.217912 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:53:29.249508 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:53:29.281853 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:53:29.312083 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:53:29.346274 1620744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:53:29.368862 1620744 ssh_runner.go:195] Run: openssl version
	I0630 15:53:29.376652 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:53:29.391675 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.396844 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.396917 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.404281 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:53:29.417581 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:53:29.430622 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.436093 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.436174 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.443611 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:53:29.457568 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:53:29.471747 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.477296 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.477380 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.485268 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
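All three CA installs above follow the same OpenSSL subject-hash convention; for the minikubeCA case the b5213941 link name comes from the certificate itself:

	pem=minikubeCA.pem
	sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/$pem"
	hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")   # prints b5213941 for this CA
	sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/${hash}.0"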
	I0630 15:53:29.498865 1620744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:53:29.504743 1620744 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:53:29.504819 1620744 kubeadm.go:392] StartCluster: {Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:53:29.504990 1620744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:53:29.505114 1620744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:53:29.554378 1620744 cri.go:89] found id: ""
	I0630 15:53:29.554448 1620744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:53:29.566684 1620744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:53:29.580816 1620744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:53:29.594087 1620744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:53:29.594122 1620744 kubeadm.go:157] found existing configuration files:
	
	I0630 15:53:29.594198 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:53:29.606128 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:53:29.606208 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:53:29.617824 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:53:29.628760 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:53:29.628849 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:53:29.643046 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:53:29.654618 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:53:29.654744 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:53:29.670789 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:53:29.686439 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:53:29.686511 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
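The four checks above apply one pattern per kubeconfig file: keep it if it already points at the control-plane endpoint, otherwise remove it. A loop equivalent (grep -q substituted for the bare grep in the log):

	for f in admin kubelet controller-manager scheduler; do
	  conf="/etc/kubernetes/${f}.conf"
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "$conf" || sudo rm -f "$conf"
	done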
	I0630 15:53:29.701021 1620744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:53:29.759278 1620744 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 15:53:29.759355 1620744 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:53:29.854960 1620744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:53:29.855106 1620744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:53:29.855286 1620744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 15:53:29.866548 1620744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0630 15:53:27.181869 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	W0630 15:53:29.675930 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:27.767088 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:27.767156 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:27.814647 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:27.814683 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.372878 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:30.392885 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:30.392993 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:30.450197 1612198 cri.go:89] found id: ""
	I0630 15:53:30.450235 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.450248 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:30.450258 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:30.450342 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:30.507009 1612198 cri.go:89] found id: ""
	I0630 15:53:30.507041 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.507051 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:30.507060 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:30.507147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:30.554455 1612198 cri.go:89] found id: ""
	I0630 15:53:30.554485 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.554496 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:30.554505 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:30.554572 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:30.598785 1612198 cri.go:89] found id: ""
	I0630 15:53:30.598821 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.598833 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:30.598841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:30.598911 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:30.634661 1612198 cri.go:89] found id: ""
	I0630 15:53:30.634701 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.634713 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:30.634722 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:30.634794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:30.674870 1612198 cri.go:89] found id: ""
	I0630 15:53:30.674903 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.674913 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:30.674922 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:30.674984 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:30.715843 1612198 cri.go:89] found id: ""
	I0630 15:53:30.715873 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.715882 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:30.715889 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:30.715947 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:30.752318 1612198 cri.go:89] found id: ""
	I0630 15:53:30.752356 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.752375 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:30.752390 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:30.752406 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.824741 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:30.824784 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:30.838605 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:30.838640 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:30.915839 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:30.915924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:30.915959 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:30.999770 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:30.999820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:29.943503 1620744 out.go:235]   - Generating certificates and keys ...
	I0630 15:53:29.943673 1620744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:53:29.943767 1620744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:53:30.013369 1620744 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:53:30.204256 1620744 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:53:30.247370 1620744 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:53:30.347086 1620744 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:53:30.905210 1620744 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:53:30.905417 1620744 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-668101 localhost] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0630 15:53:30.977829 1620744 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:53:30.978113 1620744 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-668101 localhost] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0630 15:53:31.175683 1620744 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:53:31.342818 1620744 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:53:32.050944 1620744 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:53:32.051027 1620744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:53:32.176724 1620744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:53:32.249204 1620744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 15:53:32.600906 1620744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:53:33.139702 1620744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:53:33.541220 1620744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:53:33.541742 1620744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:53:33.544105 1620744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
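The bracketed [certs] and [kubeconfig] lines above are standard kubeadm init phases; the same artifacts could also be produced piecemeal with the phase subcommands, e.g. (a sketch against this run's config file):

	sudo /var/lib/minikube/binaries/v1.33.2/kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.33.2/kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml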
	W0630 15:53:31.676642 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:32.675850 1619158 node_ready.go:49] node "flannel-668101" is "Ready"
	I0630 15:53:32.675909 1619158 node_ready.go:38] duration metric: took 10.003542336s for node "flannel-668101" to be "Ready" ...
	I0630 15:53:32.675929 1619158 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:53:32.676002 1619158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:32.701943 1619158 api_server.go:72] duration metric: took 10.844066824s to wait for apiserver process to appear ...
	I0630 15:53:32.701974 1619158 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:53:32.701996 1619158 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I0630 15:53:32.706791 1619158 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I0630 15:53:32.708016 1619158 api_server.go:141] control plane version: v1.33.2
	I0630 15:53:32.708046 1619158 api_server.go:131] duration metric: took 6.062225ms to wait for apiserver health ...
	I0630 15:53:32.708058 1619158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:53:32.716053 1619158 system_pods.go:59] 7 kube-system pods found
	I0630 15:53:32.716114 1619158 system_pods.go:61] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:32.716121 1619158 system_pods.go:61] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:32.716130 1619158 system_pods.go:61] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:32.716136 1619158 system_pods.go:61] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:32.716146 1619158 system_pods.go:61] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:32.716151 1619158 system_pods.go:61] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:32.716159 1619158 system_pods.go:61] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:32.716169 1619158 system_pods.go:74] duration metric: took 8.103111ms to wait for pod list to return data ...
	I0630 15:53:32.716184 1619158 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:53:32.721014 1619158 default_sa.go:45] found service account: "default"
	I0630 15:53:32.721045 1619158 default_sa.go:55] duration metric: took 4.852192ms for default service account to be created ...
	I0630 15:53:32.721059 1619158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:53:32.729131 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:32.729169 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:32.729178 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:32.729186 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:32.729192 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:32.729197 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:32.729208 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:32.729215 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:32.729252 1619158 retry.go:31] will retry after 311.225306ms: missing components: kube-dns
	I0630 15:53:33.046517 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.046552 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.046558 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.046563 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.046567 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.046571 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.046574 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.046578 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:33.046594 1619158 retry.go:31] will retry after 361.483143ms: missing components: kube-dns
	I0630 15:53:33.413105 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.413142 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.413148 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.413154 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.413159 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.413163 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.413171 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.413175 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:33.413191 1619158 retry.go:31] will retry after 423.305566ms: missing components: kube-dns
	I0630 15:53:33.853206 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.853242 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.853259 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.853267 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.853272 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.853277 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.853282 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.853287 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:33.853305 1619158 retry.go:31] will retry after 554.816826ms: missing components: kube-dns
	I0630 15:53:34.414917 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:34.414989 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:34.415017 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:34.415029 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:34.415036 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:34.415042 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:34.415047 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:34.415057 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:34.415250 1619158 retry.go:31] will retry after 473.364986ms: missing components: kube-dns
	I0630 15:53:34.892811 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:34.892851 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:34.892857 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:34.892863 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:34.892866 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:34.892870 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:34.892873 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:34.892877 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:34.892893 1619158 retry.go:31] will retry after 582.108906ms: missing components: kube-dns
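	[editor's note] The 1619158 stream above is minikube's wait loop (system_pods.go / retry.go) polling kube-system until CoreDNS reports Ready, with a growing backoff between attempts. A minimal hand-rolled sketch of the same wait, assuming a kubeconfig pointed at the flannel-668101 cluster and the conventional k8s-app=kube-dns label on the CoreDNS pods:

```bash
# Poll kube-system until the CoreDNS pod is Ready, backing off between tries.
# The label selector and the 1.5x growth factor are illustrative assumptions;
# the log above shows retry.go choosing jittered delays of a similar shape.
delay=0.3
until kubectl -n kube-system get pods -l k8s-app=kube-dns \
    -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}' \
    | grep -q True; do
  echo "kube-dns not ready yet; retrying in ${delay}s"
  sleep "$delay"
  delay=$(awk -v d="$delay" 'BEGIN{printf "%.2f", d*1.5}')  # grow the backoff
done
```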
	I0630 15:53:33.553483 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:33.570047 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:33.570150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:33.616739 1612198 cri.go:89] found id: ""
	I0630 15:53:33.616775 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.616788 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:33.616798 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:33.616865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:33.659234 1612198 cri.go:89] found id: ""
	I0630 15:53:33.659265 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.659277 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:33.659285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:33.659353 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:33.697938 1612198 cri.go:89] found id: ""
	I0630 15:53:33.697977 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.697989 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:33.697997 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:33.698115 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:33.739043 1612198 cri.go:89] found id: ""
	I0630 15:53:33.739104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.739118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:33.739127 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:33.739200 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:33.781947 1612198 cri.go:89] found id: ""
	I0630 15:53:33.781983 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.781994 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:33.782006 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:33.782078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:33.818201 1612198 cri.go:89] found id: ""
	I0630 15:53:33.818241 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.818254 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:33.818264 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:33.818336 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:33.865630 1612198 cri.go:89] found id: ""
	I0630 15:53:33.865767 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.865806 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:33.865851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:33.865966 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:33.905740 1612198 cri.go:89] found id: ""
	I0630 15:53:33.905807 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.905821 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:33.905834 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:33.905852 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:33.978403 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:33.978451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:34.000180 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:34.000225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:34.077381 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:34.077433 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:34.077451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:34.158516 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:34.158571 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
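	[editor's note] The 1612198 stream is a log-gathering pass against a cluster whose control plane never came up: the apiserver on localhost:8443 refuses connections, and every crictl query returns an empty ID list. The same survey can be reproduced by hand on the node (e.g. via `minikube ssh -p <profile>`); the commands below are the ones the pass itself runs:

```bash
# Empty output from crictl means "no such container", which is what every
# query in the pass above returned for the control-plane components.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  echo "${name}: ${ids:-<none>}"
done
sudo journalctl -u kubelet -n 400   # kubelet logs, as gathered above
sudo journalctl -u crio -n 400      # CRI-O logs, as gathered above
```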
	I0630 15:53:36.703046 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:36.725942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:36.726033 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:36.769910 1612198 cri.go:89] found id: ""
	I0630 15:53:36.770040 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.770066 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:36.770075 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:36.770150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:36.817303 1612198 cri.go:89] found id: ""
	I0630 15:53:36.817339 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.817350 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:36.817358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:36.817442 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:36.852676 1612198 cri.go:89] found id: ""
	I0630 15:53:36.852721 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.852734 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:36.852743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:36.852811 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:36.896796 1612198 cri.go:89] found id: ""
	I0630 15:53:36.896829 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.896840 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:36.896848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:36.896929 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:36.932669 1612198 cri.go:89] found id: ""
	I0630 15:53:36.932708 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.932720 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:36.932729 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:36.932810 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:36.972728 1612198 cri.go:89] found id: ""
	I0630 15:53:36.972762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.972773 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:36.972781 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:36.972855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:37.009554 1612198 cri.go:89] found id: ""
	I0630 15:53:37.009594 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.009605 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:37.009614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:37.009688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:37.047124 1612198 cri.go:89] found id: ""
	I0630 15:53:37.047163 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.047175 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:37.047188 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:37.047204 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:37.110372 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:37.110427 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:37.127309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:37.127352 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:37.196740 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:37.196770 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:37.196793 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:37.284276 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:37.284322 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:33.546215 1620744 out.go:235]   - Booting up control plane ...
	I0630 15:53:33.546374 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:53:33.546471 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:53:33.546551 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:53:33.567048 1620744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:53:33.573691 1620744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:53:33.573744 1620744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:53:33.768543 1620744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 15:53:33.768723 1620744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 15:53:34.769251 1620744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001331666s
	I0630 15:53:34.771797 1620744 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 15:53:34.771934 1620744 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.72.11:8443/livez
	I0630 15:53:34.772075 1620744 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 15:53:34.772163 1620744 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 15:53:37.720863 1620744 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.949703734s
	I0630 15:53:38.248441 1620744 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.477609557s
	I0630 15:53:40.275015 1620744 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.504420612s
	I0630 15:53:40.295071 1620744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 15:53:40.318773 1620744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 15:53:40.357954 1620744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 15:53:40.358269 1620744 kubeadm.go:310] [mark-control-plane] Marking the node bridge-668101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 15:53:40.377248 1620744 kubeadm.go:310] [bootstrap-token] Using token: ay7ggg.v4lz4n8lgdcwzb1z
	I0630 15:53:35.480398 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:35.480445 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:35.480453 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:35.480460 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:35.480466 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:35.480472 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:35.480477 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:35.480481 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:35.480501 1619158 retry.go:31] will retry after 722.350023ms: missing components: kube-dns
	I0630 15:53:36.207319 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:36.207354 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:36.207360 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:36.207367 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:36.207372 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:36.207376 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:36.207379 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:36.207384 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:36.207401 1619158 retry.go:31] will retry after 1.469551324s: missing components: kube-dns
	I0630 15:53:37.682415 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:37.682461 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:37.682470 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:37.682479 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:37.682484 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:37.682491 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:37.682496 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:37.682501 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:37.682522 1619158 retry.go:31] will retry after 1.601843725s: missing components: kube-dns
	I0630 15:53:39.289676 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:39.289721 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:39.289731 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:39.289741 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:39.289748 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:39.289753 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:39.289759 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:39.289763 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:39.289786 1619158 retry.go:31] will retry after 1.660514017s: missing components: kube-dns
	I0630 15:53:40.379081 1620744 out.go:235]   - Configuring RBAC rules ...
	I0630 15:53:40.379262 1620744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 15:53:40.390839 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 15:53:40.406448 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 15:53:40.414176 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 15:53:40.420005 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 15:53:40.424273 1620744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 15:53:40.682394 1620744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 15:53:41.124826 1620744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 15:53:41.682390 1620744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 15:53:41.683365 1620744 kubeadm.go:310] 
	I0630 15:53:41.683473 1620744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 15:53:41.683509 1620744 kubeadm.go:310] 
	I0630 15:53:41.683630 1620744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 15:53:41.683647 1620744 kubeadm.go:310] 
	I0630 15:53:41.683685 1620744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 15:53:41.683760 1620744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 15:53:41.683843 1620744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 15:53:41.683852 1620744 kubeadm.go:310] 
	I0630 15:53:41.683934 1620744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 15:53:41.683943 1620744 kubeadm.go:310] 
	I0630 15:53:41.684007 1620744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 15:53:41.684021 1620744 kubeadm.go:310] 
	I0630 15:53:41.684099 1620744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 15:53:41.684203 1620744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 15:53:41.684332 1620744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 15:53:41.684349 1620744 kubeadm.go:310] 
	I0630 15:53:41.684477 1620744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 15:53:41.684586 1620744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 15:53:41.684594 1620744 kubeadm.go:310] 
	I0630 15:53:41.684715 1620744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ay7ggg.v4lz4n8lgdcwzb1z \
	I0630 15:53:41.684897 1620744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 15:53:41.684947 1620744 kubeadm.go:310] 	--control-plane 
	I0630 15:53:41.684960 1620744 kubeadm.go:310] 
	I0630 15:53:41.685080 1620744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 15:53:41.685101 1620744 kubeadm.go:310] 
	I0630 15:53:41.685204 1620744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ay7ggg.v4lz4n8lgdcwzb1z \
	I0630 15:53:41.685345 1620744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
	I0630 15:53:41.686851 1620744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:53:41.686884 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:53:41.688726 1620744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
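	[editor's note] The kubeadm output above embeds a join command whose --discovery-token-ca-cert-hash pins the cluster CA. For reference, that sha256 value is derived from the CA certificate with the standard procedure from the kubeadm documentation (run on the control-plane node; path is the default kubeadm PKI location):

```bash
# Derive the discovery hash that appears in the join command above.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```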
	I0630 15:53:39.832609 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:39.849706 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:39.849794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:39.893352 1612198 cri.go:89] found id: ""
	I0630 15:53:39.893391 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.893433 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:39.893442 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:39.893515 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:39.932840 1612198 cri.go:89] found id: ""
	I0630 15:53:39.932868 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.932876 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:39.932890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:39.932955 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:39.981060 1612198 cri.go:89] found id: ""
	I0630 15:53:39.981097 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.981109 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:39.981117 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:39.981203 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:40.018727 1612198 cri.go:89] found id: ""
	I0630 15:53:40.018768 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.018781 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:40.018790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:40.018863 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:40.061585 1612198 cri.go:89] found id: ""
	I0630 15:53:40.061627 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.061640 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:40.061649 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:40.061743 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:40.105417 1612198 cri.go:89] found id: ""
	I0630 15:53:40.105448 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.105456 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:40.105464 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:40.105527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:40.141656 1612198 cri.go:89] found id: ""
	I0630 15:53:40.141686 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.141697 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:40.141705 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:40.141775 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:40.179978 1612198 cri.go:89] found id: ""
	I0630 15:53:40.180011 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.180020 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:40.180029 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:40.180042 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:40.197879 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:40.197924 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:40.271201 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:40.271257 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:40.271277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:40.355166 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:40.355211 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:40.408985 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:40.409023 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:41.690209 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:53:41.702679 1620744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 15:53:41.734200 1620744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:53:41.734327 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:41.734404 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-668101 minikube.k8s.io/updated_at=2025_06_30T15_53_41_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=bridge-668101 minikube.k8s.io/primary=true
	I0630 15:53:41.895628 1620744 ops.go:34] apiserver oom_adj: -16
	I0630 15:53:41.895917 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:42.396198 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:42.896761 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
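	[editor's note] In the 1620744 block above, minikube writes a 496-byte /etc/cni/net.d/1-k8s.conflist to configure the bridge CNI. The exact payload is not shown in the log; the sketch below is an illustrative bridge conflist with host-local IPAM and a portmap chained plugin, with all field values assumed rather than taken from the test:

```json
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

	The repeated `kubectl get sa default` runs that follow the conflist write are minikube polling until the default ServiceAccount exists, the signal that kube-system privileges can be elevated (the "elevateKubeSystemPrivileges" duration metric below).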
	I0630 15:53:40.954924 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:40.954967 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:40.954975 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:40.954985 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:40.954990 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:40.954996 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:40.955000 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:40.955005 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:40.955026 1619158 retry.go:31] will retry after 2.638740648s: missing components: kube-dns
	I0630 15:53:43.598079 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:43.598113 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:43.598119 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:43.598126 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:43.598130 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:43.598134 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:43.598137 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:43.598140 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:43.598162 1619158 retry.go:31] will retry after 3.489845888s: missing components: kube-dns
	I0630 15:53:43.396863 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:43.896228 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:44.396818 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:44.896130 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:45.396432 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:45.896985 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:46.000729 1620744 kubeadm.go:1105] duration metric: took 4.266473213s to wait for elevateKubeSystemPrivileges
	I0630 15:53:46.000792 1620744 kubeadm.go:394] duration metric: took 16.495976664s to StartCluster
	I0630 15:53:46.000825 1620744 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:46.000948 1620744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:53:46.002167 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:46.002462 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 15:53:46.002466 1620744 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:53:46.002560 1620744 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:53:46.002667 1620744 addons.go:69] Setting storage-provisioner=true in profile "bridge-668101"
	I0630 15:53:46.002692 1620744 addons.go:238] Setting addon storage-provisioner=true in "bridge-668101"
	I0630 15:53:46.002713 1620744 addons.go:69] Setting default-storageclass=true in profile "bridge-668101"
	I0630 15:53:46.002742 1620744 host.go:66] Checking if "bridge-668101" exists ...
	I0630 15:53:46.002766 1620744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-668101"
	I0630 15:53:46.002725 1620744 config.go:182] Loaded profile config "bridge-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:46.003139 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.003182 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.003225 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.003269 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.004052 1620744 out.go:177] * Verifying Kubernetes components...
	I0630 15:53:46.005665 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:46.020307 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0630 15:53:46.021011 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.021601 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.021625 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.021987 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.022574 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.022627 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.026416 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0630 15:53:46.027718 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.028783 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.028829 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.029604 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.029867 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.035884 1620744 addons.go:238] Setting addon default-storageclass=true in "bridge-668101"
	I0630 15:53:46.035944 1620744 host.go:66] Checking if "bridge-668101" exists ...
	I0630 15:53:46.036350 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.036409 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.039472 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0630 15:53:46.040012 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.040664 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.040690 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.041066 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.041289 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.043282 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:46.045535 1620744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:53:42.967786 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:42.987531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:42.987625 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:43.023328 1612198 cri.go:89] found id: ""
	I0630 15:53:43.023360 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.023370 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:43.023377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:43.023449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:43.059730 1612198 cri.go:89] found id: ""
	I0630 15:53:43.059774 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.059785 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:43.059793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:43.059875 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:43.100987 1612198 cri.go:89] found id: ""
	I0630 15:53:43.101024 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.101036 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:43.101045 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:43.101118 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:43.139556 1612198 cri.go:89] found id: ""
	I0630 15:53:43.139591 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.139603 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:43.139611 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:43.139669 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:43.177647 1612198 cri.go:89] found id: ""
	I0630 15:53:43.177677 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.177686 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:43.177692 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:43.177749 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:43.214354 1612198 cri.go:89] found id: ""
	I0630 15:53:43.214388 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.214400 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:43.214407 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:43.214475 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:43.254332 1612198 cri.go:89] found id: ""
	I0630 15:53:43.254364 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.254376 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:43.254393 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:43.254459 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:43.292194 1612198 cri.go:89] found id: ""
	I0630 15:53:43.292224 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.292232 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:43.292243 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:43.292255 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:43.345690 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:43.345732 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:43.360155 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:43.360191 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:43.441505 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:43.441537 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:43.441554 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:43.527009 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:43.527063 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:46.069596 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:46.092563 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:46.092646 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:46.132093 1612198 cri.go:89] found id: ""
	I0630 15:53:46.132131 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.132144 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:46.132153 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:46.132225 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:46.175509 1612198 cri.go:89] found id: ""
	I0630 15:53:46.175544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.175556 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:46.175565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:46.175647 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:46.225442 1612198 cri.go:89] found id: ""
	I0630 15:53:46.225478 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.225490 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:46.225502 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:46.225573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:46.275070 1612198 cri.go:89] found id: ""
	I0630 15:53:46.275109 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.275122 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:46.275131 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:46.275206 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:46.320084 1612198 cri.go:89] found id: ""
	I0630 15:53:46.320116 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.320126 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:46.320133 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:46.320198 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:46.360602 1612198 cri.go:89] found id: ""
	I0630 15:53:46.360682 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.360699 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:46.360711 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:46.360818 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:46.404187 1612198 cri.go:89] found id: ""
	I0630 15:53:46.404222 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.404231 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:46.404238 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:46.404304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:46.457761 1612198 cri.go:89] found id: ""
	I0630 15:53:46.457803 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.457820 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:46.457835 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:46.457855 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:46.524526 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:46.524574 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:46.542938 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:46.542974 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:46.620336 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:46.620372 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:46.620386 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:46.706447 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:46.706496 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:46.047099 1620744 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:46.047127 1620744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:53:46.047171 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:46.051881 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.052589 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:46.052618 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.052990 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:46.053240 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:46.053473 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:46.053666 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:46.055796 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0630 15:53:46.056603 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.057196 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.057218 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.057663 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.058201 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.058252 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.078886 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0630 15:53:46.079821 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.080456 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.080484 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.080941 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.081233 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.083743 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:46.084008 1620744 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:46.084024 1620744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:53:46.084042 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:46.088653 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.089277 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:46.089310 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.089516 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:46.089752 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:46.090006 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:46.090184 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:46.376641 1620744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:46.376679 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 15:53:46.468914 1620744 node_ready.go:35] waiting up to 15m0s for node "bridge-668101" to be "Ready" ...
	I0630 15:53:46.483783 1620744 node_ready.go:49] node "bridge-668101" is "Ready"
	I0630 15:53:46.483830 1620744 node_ready.go:38] duration metric: took 14.870889ms for node "bridge-668101" to be "Ready" ...
	I0630 15:53:46.483849 1620744 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:53:46.483904 1620744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:46.639045 1620744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:46.707352 1620744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:47.223014 1620744 start.go:972] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
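The sed pipeline in the earlier "Run:" line edits the coredns ConfigMap text directly rather than patching it field-by-field: it inserts a hosts stanza before the forward plugin and a log directive before errors, then pipes the result to kubectl replace. The full Corefile is not printed in the log; reconstructed from those two sed expressions, the edited server block should look roughly like this (the "..." lines stand for the untouched remainder of the stock Corefile):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }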
	I0630 15:53:47.223081 1620744 api_server.go:72] duration metric: took 1.2205745s to wait for apiserver process to appear ...
	I0630 15:53:47.223099 1620744 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:53:47.223143 1620744 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I0630 15:53:47.223206 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.223233 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.223657 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.223694 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.223705 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.223713 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.223714 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.223963 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.224017 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.223999 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.242476 1620744 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I0630 15:53:47.244520 1620744 api_server.go:141] control plane version: v1.33.2
	I0630 15:53:47.244556 1620744 api_server.go:131] duration metric: took 21.449815ms to wait for apiserver health ...
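The healthz wait above is a plain HTTPS poll: GET /healthz on the apiserver until it answers 200 with body "ok" (hence the two-line "returned 200: / ok" entry). A minimal sketch of that loop, assuming a self-signed test cluster and therefore InsecureSkipVerify; minikube's real client trusts the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Assumption: self-signed apiserver cert on a test VM.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.72.11:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }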
	I0630 15:53:47.244567 1620744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:53:47.260743 1620744 system_pods.go:59] 7 kube-system pods found
	I0630 15:53:47.260790 1620744 system_pods.go:61] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.260803 1620744 system_pods.go:61] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.260813 1620744 system_pods.go:61] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.260822 1620744 system_pods.go:61] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.260833 1620744 system_pods.go:61] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.260847 1620744 system_pods.go:61] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.260855 1620744 system_pods.go:61] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.260862 1620744 system_pods.go:74] duration metric: took 16.289084ms to wait for pod list to return data ...
	I0630 15:53:47.260873 1620744 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:53:47.265456 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.265485 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.265804 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.265825 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.265828 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.273837 1620744 default_sa.go:45] found service account: "default"
	I0630 15:53:47.273880 1620744 default_sa.go:55] duration metric: took 12.997202ms for default service account to be created ...
	I0630 15:53:47.273895 1620744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:53:47.345061 1620744 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.345113 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.345126 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.345134 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.345144 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.345154 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.345162 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.345175 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.345223 1620744 retry.go:31] will retry after 281.101886ms: missing components: kube-dns
	I0630 15:53:47.638563 1620744 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.638608 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.638620 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.638628 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.638637 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.638647 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.638656 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.638663 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.638680 1620744 retry.go:31] will retry after 257.359626ms: missing components: kube-dns
	I0630 15:53:47.705752 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.705779 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.706118 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.706145 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.706176 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.706184 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.706445 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.706459 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.709137 1620744 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0630 15:53:47.710580 1620744 addons.go:514] duration metric: took 1.708021313s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0630 15:53:47.727425 1620744 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-668101" context rescaled to 1 replicas
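The kapi.go line above rescales the coredns Deployment from its default two replicas down to one, which is why one of the two coredns pods disappears later in this run ("pods "coredns-674b8bbfcf-qt9bv" not found" further down). A client-go sketch of that rescale via the scale subresource; the kubeconfig path is the in-VM path from the log and is an assumption for wherever this runs:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // Read the current scale subresource, then write it back with Replicas=1.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }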
	I0630 15:53:47.901617 1620744 system_pods.go:86] 8 kube-system pods found
	I0630 15:53:47.901662 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.901673 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.901680 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.901689 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.901699 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.901705 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.901716 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.901729 1620744 system_pods.go:89] "storage-provisioner" [d39eade7-d69c-4ba1-871c-9d22e90f3162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:47.901756 1620744 retry.go:31] will retry after 361.046684ms: missing components: kube-dns
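The retry.go entries above show the polling shape used while kube-dns is still Pending: rescan the kube-system pods, and if anything on the required-components list is missing, sleep a short interval and try again. The logged 281ms/257ms/361ms waits suggest jitter around a base delay; the exact backoff policy is not visible in the log, so the randomized wait below is an assumption. A generic sketch of the loop, with checkPods standing in for the real pod scan:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForComponents polls checkPods until nothing is missing or the
    // timeout expires, sleeping a jittered interval between attempts.
    func waitForComponents(checkPods func() []string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            missing := checkPods()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("missing components: %v", missing)
            }
            // Assumed jitter: 200-400ms, roughly matching the logged waits.
            wait := 200*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
            time.Sleep(wait)
        }
    }

    func main() {
        attempts := 0
        err := waitForComponents(func() []string {
            attempts++
            if attempts < 3 {
                return []string{"kube-dns"} // pretend kube-dns is still Pending
            }
            return nil
        }, time.Minute)
        fmt.Println("done, err =", err)
    }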
	I0630 15:53:47.092203 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.092247 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Running
	I0630 15:53:47.092256 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:47.092261 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:47.092266 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:47.092272 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:47.092279 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:47.092285 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:47.092297 1619158 system_pods.go:126] duration metric: took 14.371230346s to wait for k8s-apps to be running ...
	I0630 15:53:47.092315 1619158 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 15:53:47.092395 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:53:47.107330 1619158 system_svc.go:56] duration metric: took 14.999723ms WaitForService to wait for kubelet
	I0630 15:53:47.107386 1619158 kubeadm.go:578] duration metric: took 25.24951704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:53:47.107425 1619158 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:53:47.111477 1619158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:53:47.111513 1619158 node_conditions.go:123] node cpu capacity is 2
	I0630 15:53:47.111531 1619158 node_conditions.go:105] duration metric: took 4.099412ms to run NodePressure ...
	I0630 15:53:47.111548 1619158 start.go:241] waiting for startup goroutines ...
	I0630 15:53:47.111557 1619158 start.go:246] waiting for cluster config update ...
	I0630 15:53:47.111572 1619158 start.go:255] writing updated cluster config ...
	I0630 15:53:47.111942 1619158 ssh_runner.go:195] Run: rm -f paused
	I0630 15:53:47.118482 1619158 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:47.122226 1619158 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-zlnjm" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.126835 1619158 pod_ready.go:94] pod "coredns-674b8bbfcf-zlnjm" is "Ready"
	I0630 15:53:47.126873 1619158 pod_ready.go:86] duration metric: took 4.619265ms for pod "coredns-674b8bbfcf-zlnjm" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.129263 1619158 pod_ready.go:83] waiting for pod "etcd-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.133727 1619158 pod_ready.go:94] pod "etcd-flannel-668101" is "Ready"
	I0630 15:53:47.133762 1619158 pod_ready.go:86] duration metric: took 4.469718ms for pod "etcd-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.135699 1619158 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.140237 1619158 pod_ready.go:94] pod "kube-apiserver-flannel-668101" is "Ready"
	I0630 15:53:47.140273 1619158 pod_ready.go:86] duration metric: took 4.536145ms for pod "kube-apiserver-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.143805 1619158 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.524212 1619158 pod_ready.go:94] pod "kube-controller-manager-flannel-668101" is "Ready"
	I0630 15:53:47.524250 1619158 pod_ready.go:86] duration metric: took 380.412398ms for pod "kube-controller-manager-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.723808 1619158 pod_ready.go:83] waiting for pod "kube-proxy-fl9rb" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.122925 1619158 pod_ready.go:94] pod "kube-proxy-fl9rb" is "Ready"
	I0630 15:53:48.122960 1619158 pod_ready.go:86] duration metric: took 399.120603ms for pod "kube-proxy-fl9rb" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.323641 1619158 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.722788 1619158 pod_ready.go:94] pod "kube-scheduler-flannel-668101" is "Ready"
	I0630 15:53:48.722822 1619158 pod_ready.go:86] duration metric: took 399.155106ms for pod "kube-scheduler-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.722836 1619158 pod_ready.go:40] duration metric: took 1.604308968s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:48.771506 1619158 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:53:48.774098 1619158 out.go:177] * Done! kubectl is now configured to use "flannel-668101" cluster and "default" namespace by default
	I0630 15:53:49.256833 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:49.276256 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:49.276328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:49.326292 1612198 cri.go:89] found id: ""
	I0630 15:53:49.326327 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.326339 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:49.326356 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:49.326427 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:49.371428 1612198 cri.go:89] found id: ""
	I0630 15:53:49.371486 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.371496 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:49.371503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:49.371568 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:49.415763 1612198 cri.go:89] found id: ""
	I0630 15:53:49.415840 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.415855 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:49.415864 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:49.415927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:49.456276 1612198 cri.go:89] found id: ""
	I0630 15:53:49.456313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.456324 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:49.456332 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:49.456421 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:49.496696 1612198 cri.go:89] found id: ""
	I0630 15:53:49.496735 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.496753 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:49.496762 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:49.496819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:49.537728 1612198 cri.go:89] found id: ""
	I0630 15:53:49.537763 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.537771 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:49.537778 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:49.537837 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:49.575693 1612198 cri.go:89] found id: ""
	I0630 15:53:49.575725 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.575734 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:49.575740 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:49.575795 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:49.617896 1612198 cri.go:89] found id: ""
	I0630 15:53:49.617931 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.617941 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:49.617967 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:49.617986 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:49.668327 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:49.668372 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:49.721223 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:49.721270 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:49.737061 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:49.737094 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:49.814464 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:49.814490 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:49.814503 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
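From here the 1612198 run (an old v1.20.0 cluster) repeats one diagnostic cycle: pgrep for a kube-apiserver process, then crictl ps -a --quiet --name=<component> for each control-plane component (empty output is what logs.go reports as "0 containers"), then log gathering via journalctl, dmesg, and kubectl describe nodes. The describe step keeps failing with "connection to the server localhost:8443 was refused", which is consistent with the apiserver container never having started. A sketch of the per-component scan, assuming local sudo access rather than minikube's SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs crictl and returns the matching container IDs;
    // crictl prints one ID per line, so empty output means zero containers.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("%s: error: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }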
	I0630 15:53:52.393329 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:52.409925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:52.410010 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:52.446622 1612198 cri.go:89] found id: ""
	I0630 15:53:52.446659 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.446673 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:52.446684 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:52.446769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:52.493894 1612198 cri.go:89] found id: ""
	I0630 15:53:52.493929 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.493940 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:52.493947 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:52.494012 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:52.530891 1612198 cri.go:89] found id: ""
	I0630 15:53:52.530943 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.530956 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:52.530965 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:52.531141 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:52.569016 1612198 cri.go:89] found id: ""
	I0630 15:53:52.569046 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.569054 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:52.569068 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:52.569144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:52.607137 1612198 cri.go:89] found id: ""
	I0630 15:53:52.607176 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.607186 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:52.607194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:52.607264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:52.655286 1612198 cri.go:89] found id: ""
	I0630 15:53:52.655334 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.655343 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:52.655350 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:52.655420 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:48.266876 1620744 system_pods.go:86] 8 kube-system pods found
	I0630 15:53:48.266910 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:48.266917 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:48.266923 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:48.266928 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:48.266936 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:48.266940 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:48.266944 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:48.266949 1620744 system_pods.go:89] "storage-provisioner" [d39eade7-d69c-4ba1-871c-9d22e90f3162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:48.266959 1620744 system_pods.go:126] duration metric: took 993.056385ms to wait for k8s-apps to be running ...
	I0630 15:53:48.266967 1620744 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 15:53:48.267016 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:53:48.282778 1620744 system_svc.go:56] duration metric: took 15.79609ms WaitForService to wait for kubelet
	I0630 15:53:48.282832 1620744 kubeadm.go:578] duration metric: took 2.28032496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:53:48.282860 1620744 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:53:48.286721 1620744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:53:48.286750 1620744 node_conditions.go:123] node cpu capacity is 2
	I0630 15:53:48.286764 1620744 node_conditions.go:105] duration metric: took 3.897099ms to run NodePressure ...
	I0630 15:53:48.286777 1620744 start.go:241] waiting for startup goroutines ...
	I0630 15:53:48.286784 1620744 start.go:246] waiting for cluster config update ...
	I0630 15:53:48.286794 1620744 start.go:255] writing updated cluster config ...
	I0630 15:53:48.287052 1620744 ssh_runner.go:195] Run: rm -f paused
	I0630 15:53:48.292293 1620744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:48.297080 1620744 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-hggsr" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:53:50.309473 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	W0630 15:53:52.803327 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	I0630 15:53:52.693017 1612198 cri.go:89] found id: ""
	I0630 15:53:52.693053 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.693066 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:52.693093 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:52.693156 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:52.729639 1612198 cri.go:89] found id: ""
	I0630 15:53:52.729674 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.729685 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:52.729713 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:52.729731 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:52.744808 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:52.744846 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:52.818006 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:52.818076 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:52.818095 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:52.913720 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:52.913794 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:52.955851 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:52.955898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:55.506514 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:55.523943 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:55.524024 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:55.562846 1612198 cri.go:89] found id: ""
	I0630 15:53:55.562884 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.562893 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:55.562900 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:55.562960 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:55.601862 1612198 cri.go:89] found id: ""
	I0630 15:53:55.601895 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.601907 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:55.601915 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:55.601988 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:55.650904 1612198 cri.go:89] found id: ""
	I0630 15:53:55.650946 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.650958 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:55.650968 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:55.651051 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:55.695050 1612198 cri.go:89] found id: ""
	I0630 15:53:55.695081 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.695089 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:55.695096 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:55.695167 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:55.732863 1612198 cri.go:89] found id: ""
	I0630 15:53:55.732904 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.732917 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:55.732925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:55.732997 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:55.772221 1612198 cri.go:89] found id: ""
	I0630 15:53:55.772254 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.772265 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:55.772275 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:55.772349 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:55.811091 1612198 cri.go:89] found id: ""
	I0630 15:53:55.811134 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.811146 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:55.811154 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:55.811213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:55.846273 1612198 cri.go:89] found id: ""
	I0630 15:53:55.846313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.846338 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:55.846352 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:55.846370 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:55.921797 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:55.921845 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:55.963517 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:55.963553 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:56.023942 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:56.023988 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:56.038647 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:56.038687 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:56.119572 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0630 15:53:55.303307 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	I0630 15:53:55.805200 1620744 pod_ready.go:94] pod "coredns-674b8bbfcf-hggsr" is "Ready"
	I0630 15:53:55.805235 1620744 pod_ready.go:86] duration metric: took 7.508115108s for pod "coredns-674b8bbfcf-hggsr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:55.805249 1620744 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:53:57.811769 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-qt9bv" is not "Ready", error: <nil>
	I0630 15:53:58.309220 1620744 pod_ready.go:99] pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace is gone: getting pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace (will retry): pods "coredns-674b8bbfcf-qt9bv" not found
	I0630 15:53:58.309253 1620744 pod_ready.go:86] duration metric: took 2.5039962s for pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace to be "Ready" or be gone ...
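Note that pod_ready treats a NotFound error as success ("Ready" or be gone): the second coredns pod was deleted by the earlier rescale to one replica, so its disappearance completes the wait rather than failing it. A client-go sketch of that check, with the kubeconfig path again assumed:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // readyOrGone reports success if the pod's Ready condition is True, or
    // if the pod no longer exists at all, matching the log's wording.
    func readyOrGone(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // pod is gone: counts as done
        }
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ok, err := readyOrGone(cs, "kube-system", "coredns-674b8bbfcf-qt9bv")
        fmt.Println("ready or gone:", ok, "err:", err)
    }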
	I0630 15:53:58.311407 1620744 pod_ready.go:83] waiting for pod "etcd-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.315815 1620744 pod_ready.go:94] pod "etcd-bridge-668101" is "Ready"
	I0630 15:53:58.315845 1620744 pod_ready.go:86] duration metric: took 4.413088ms for pod "etcd-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.317890 1620744 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.321951 1620744 pod_ready.go:94] pod "kube-apiserver-bridge-668101" is "Ready"
	I0630 15:53:58.322004 1620744 pod_ready.go:86] duration metric: took 4.070763ms for pod "kube-apiserver-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.325941 1620744 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.330240 1620744 pod_ready.go:94] pod "kube-controller-manager-bridge-668101" is "Ready"
	I0630 15:53:58.330273 1620744 pod_ready.go:86] duration metric: took 4.307436ms for pod "kube-controller-manager-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.509388 1620744 pod_ready.go:83] waiting for pod "kube-proxy-q2tjj" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.911133 1620744 pod_ready.go:94] pod "kube-proxy-q2tjj" is "Ready"
	I0630 15:53:58.911181 1620744 pod_ready.go:86] duration metric: took 401.753348ms for pod "kube-proxy-q2tjj" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.110354 1620744 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.509728 1620744 pod_ready.go:94] pod "kube-scheduler-bridge-668101" is "Ready"
	I0630 15:53:59.509764 1620744 pod_ready.go:86] duration metric: took 399.372679ms for pod "kube-scheduler-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.509778 1620744 pod_ready.go:40] duration metric: took 11.217429269s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:59.557513 1620744 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:53:59.559079 1620744 out.go:177] * Done! kubectl is now configured to use "bridge-668101" cluster and "default" namespace by default
	I0630 15:53:58.620232 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:58.638119 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:58.638194 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:58.674101 1612198 cri.go:89] found id: ""
	I0630 15:53:58.674160 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.674175 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:58.674184 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:58.674259 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:58.712115 1612198 cri.go:89] found id: ""
	I0630 15:53:58.712167 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.712179 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:58.712192 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:58.712261 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:58.766961 1612198 cri.go:89] found id: ""
	I0630 15:53:58.767004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.767016 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:58.767025 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:58.767114 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:58.817233 1612198 cri.go:89] found id: ""
	I0630 15:53:58.817274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.817286 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:58.817297 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:58.817379 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:58.858728 1612198 cri.go:89] found id: ""
	I0630 15:53:58.858757 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.858774 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:58.858784 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:58.858842 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:58.900041 1612198 cri.go:89] found id: ""
	I0630 15:53:58.900082 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.900094 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:58.900102 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:58.900176 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:58.944995 1612198 cri.go:89] found id: ""
	I0630 15:53:58.945026 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.945037 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:58.945046 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:58.945110 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:58.987156 1612198 cri.go:89] found id: ""
	I0630 15:53:58.987204 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.987216 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:58.987233 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:58.987252 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:59.054774 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:59.054821 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:59.071556 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:59.071601 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:59.144600 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:59.144631 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:59.144644 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:59.218471 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:59.218519 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:01.761632 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:01.781793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:01.781885 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:01.834337 1612198 cri.go:89] found id: ""
	I0630 15:54:01.834370 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.834381 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:01.834390 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:01.834456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:01.879488 1612198 cri.go:89] found id: ""
	I0630 15:54:01.879528 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.879542 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:01.879552 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:01.879629 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:01.919612 1612198 cri.go:89] found id: ""
	I0630 15:54:01.919656 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.919671 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:01.919681 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:01.919755 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:01.959025 1612198 cri.go:89] found id: ""
	I0630 15:54:01.959108 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.959118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:01.959126 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:01.959213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:02.004157 1612198 cri.go:89] found id: ""
	I0630 15:54:02.004193 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.004207 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:02.004216 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:02.004293 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:02.041453 1612198 cri.go:89] found id: ""
	I0630 15:54:02.041488 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.041496 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:02.041503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:02.041573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:02.092760 1612198 cri.go:89] found id: ""
	I0630 15:54:02.092801 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.092814 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:02.092824 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:02.092894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:02.130937 1612198 cri.go:89] found id: ""
	I0630 15:54:02.130976 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.130985 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:02.130996 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:02.131076 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:02.186285 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:02.186333 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:02.203252 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:02.203283 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:02.274788 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:02.274820 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:02.274836 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:02.354791 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:02.354835 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:04.902714 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:04.922560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:04.922631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:04.961257 1612198 cri.go:89] found id: ""
	I0630 15:54:04.961291 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.961302 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:04.961312 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:04.961388 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:04.997894 1612198 cri.go:89] found id: ""
	I0630 15:54:04.997927 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.997936 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:04.997942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:04.998007 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:05.038875 1612198 cri.go:89] found id: ""
	I0630 15:54:05.038923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.038936 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:05.038945 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:05.039035 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:05.080082 1612198 cri.go:89] found id: ""
	I0630 15:54:05.080123 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.080135 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:05.080145 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:05.080205 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:05.117322 1612198 cri.go:89] found id: ""
	I0630 15:54:05.117358 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.117371 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:05.117378 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:05.117469 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:05.172542 1612198 cri.go:89] found id: ""
	I0630 15:54:05.172578 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.172589 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:05.172598 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:05.172666 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:05.220246 1612198 cri.go:89] found id: ""
	I0630 15:54:05.220280 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.220291 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:05.220299 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:05.220365 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:05.279486 1612198 cri.go:89] found id: ""
	I0630 15:54:05.279521 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.279533 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:05.279548 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:05.279564 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:05.341677 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:05.341734 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:05.359513 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:05.359566 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:05.445100 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:05.445128 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:05.445144 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:05.552812 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:05.552883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.098433 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:08.115865 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:08.115985 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:08.155035 1612198 cri.go:89] found id: ""
	I0630 15:54:08.155077 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.155092 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:08.155103 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:08.155173 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:08.192666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.192702 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.192711 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:08.192719 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:08.192791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:08.234681 1612198 cri.go:89] found id: ""
	I0630 15:54:08.234710 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.234718 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:08.234723 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:08.234782 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:08.271666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.271699 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.271707 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:08.271714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:08.271769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:08.309335 1612198 cri.go:89] found id: ""
	I0630 15:54:08.309366 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.309375 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:08.309381 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:08.309471 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:08.351248 1612198 cri.go:89] found id: ""
	I0630 15:54:08.351284 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.351296 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:08.351305 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:08.351384 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:08.386803 1612198 cri.go:89] found id: ""
	I0630 15:54:08.386833 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.386843 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:08.386851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:08.386922 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:08.434407 1612198 cri.go:89] found id: ""
	I0630 15:54:08.434442 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.434451 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:08.434461 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:08.434474 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:08.510981 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:08.511009 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:08.511028 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:08.590361 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:08.590426 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.634603 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:08.634636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:08.687291 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:08.687339 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.202732 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:11.228516 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:11.228589 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:11.307836 1612198 cri.go:89] found id: ""
	I0630 15:54:11.307870 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.307882 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:11.307890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:11.307973 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:11.359347 1612198 cri.go:89] found id: ""
	I0630 15:54:11.359380 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.359400 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:11.359408 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:11.359467 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:11.414423 1612198 cri.go:89] found id: ""
	I0630 15:54:11.414469 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.414479 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:11.414486 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:11.414549 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:11.457669 1612198 cri.go:89] found id: ""
	I0630 15:54:11.457704 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.457722 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:11.457735 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:11.457804 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:11.511061 1612198 cri.go:89] found id: ""
	I0630 15:54:11.511131 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.511147 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:11.511159 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:11.511345 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:11.557886 1612198 cri.go:89] found id: ""
	I0630 15:54:11.557923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.557936 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:11.557946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:11.558014 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:11.603894 1612198 cri.go:89] found id: ""
	I0630 15:54:11.603926 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.603938 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:11.603946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:11.604016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:11.652115 1612198 cri.go:89] found id: ""
	I0630 15:54:11.652147 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.652156 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:11.652165 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:11.652177 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:11.700550 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:11.700588 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:11.761044 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:11.761088 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.779581 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:11.779669 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:11.872983 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:11.873013 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:11.873040 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:14.469180 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:14.488438 1612198 kubeadm.go:593] duration metric: took 4m4.858627578s to restartPrimaryControlPlane
	W0630 15:54:14.488521 1612198 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0630 15:54:14.488557 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:54:16.362367 1612198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.873774715s)
	I0630 15:54:16.362472 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:54:16.381754 1612198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:54:16.394832 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:54:16.407997 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:54:16.408022 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:54:16.408088 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:54:16.420299 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:54:16.420374 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:54:16.432689 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:54:16.450141 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:54:16.450232 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:54:16.466230 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.478725 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:54:16.478810 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.491926 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:54:16.503661 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:54:16.503754 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
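	The grep/rm cycle above is minikube's stale-kubeconfig cleanup: for each file it checks whether the config points at https://control-plane.minikube.internal:8443 and removes it when that cannot be confirmed (here every grep exits with status 2 because the files are already gone after `kubeadm reset`). A minimal sketch of the same logic, assuming the standard /etc/kubernetes paths shown in the log:
	
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # grep -q exits 0 if the endpoint is present, 1 if the file lacks it,
	      # 2 if the file is missing; remove the file in the non-zero cases.
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done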
	I0630 15:54:16.516000 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:54:16.604779 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:54:16.604866 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:54:16.771725 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:54:16.771885 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:54:16.772009 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:54:17.000568 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:54:17.002768 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:54:17.007633 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:54:17.007744 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:54:17.007835 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:54:17.007906 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:54:17.007987 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:54:17.008050 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:54:17.008130 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:54:17.008216 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:54:17.008304 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:54:17.008429 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:54:17.008479 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:54:17.008545 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:54:17.091062 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:54:17.216540 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:54:17.314609 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:54:17.399588 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:54:17.417749 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:54:17.418852 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:54:17.418923 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:54:17.631341 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:54:17.633197 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:54:17.633340 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:54:17.639557 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:54:17.642269 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:54:17.646155 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:54:17.647610 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:54:57.647972 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:54:57.648456 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:54:57.648704 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:02.649537 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:02.649775 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:12.650265 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:12.650526 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:32.650986 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:32.651250 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652241 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:12.652569 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652621 1612198 kubeadm.go:310] 
	I0630 15:56:12.652681 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:56:12.652741 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:56:12.652751 1612198 kubeadm.go:310] 
	I0630 15:56:12.652778 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:56:12.652814 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:56:12.652960 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:56:12.652983 1612198 kubeadm.go:310] 
	I0630 15:56:12.653129 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:56:12.653192 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:56:12.653257 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:56:12.653270 1612198 kubeadm.go:310] 
	I0630 15:56:12.653457 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:56:12.653585 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:56:12.653603 1612198 kubeadm.go:310] 
	I0630 15:56:12.653767 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:56:12.653893 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:56:12.654008 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:56:12.654137 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:56:12.654157 1612198 kubeadm.go:310] 
	I0630 15:56:12.655912 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:56:12.655994 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:56:12.656047 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
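	The wait-control-plane failure above reduces to one symptom: the kubelet's health endpoint on port 10248 never answers. The probes kubeadm recommends in its own output can be run directly on the node; a minimal sketch, assuming a systemd host and the CRI-O socket path that appears in this log:
	
	    sudo systemctl status kubelet --no-pager           # is the unit running at all?
	    sudo journalctl -xeu kubelet -n 100 --no-pager     # why it exited, if it did
	    curl -sS http://localhost:10248/healthz; echo      # the health probe kubeadm polls
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause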
	W0630 15:56:12.656312 1612198 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0630 15:56:12.656390 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:56:13.118145 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:56:13.137252 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:56:13.148791 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:56:13.148814 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:56:13.148866 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:56:13.159734 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:56:13.159815 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:56:13.170810 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:56:13.181716 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:56:13.181794 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:56:13.193772 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.204825 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:56:13.204895 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.216418 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:56:13.227545 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:56:13.227620 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:56:13.239663 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:56:13.314550 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:56:13.314640 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:56:13.462367 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:56:13.462550 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:56:13.462695 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:56:13.649387 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:56:13.651840 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:56:13.651943 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:56:13.652047 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:56:13.652179 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:56:13.652262 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:56:13.652381 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:56:13.652486 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:56:13.652658 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:56:13.652726 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:56:13.652788 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:56:13.652876 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:56:13.652930 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:56:13.653009 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:56:13.920791 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:56:14.049695 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:56:14.213882 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:56:14.469969 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:56:14.493927 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:56:14.496121 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:56:14.496179 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:56:14.667471 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:56:14.669824 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:56:14.670005 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:56:14.673040 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:56:14.674211 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:56:14.675608 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:56:14.680984 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:56:54.682952 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:56:54.683551 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:54.683769 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:59.684143 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:59.684406 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:09.685091 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:09.685374 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:29.686408 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:29.686681 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688249 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:58:09.688537 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688564 1612198 kubeadm.go:310] 
	I0630 15:58:09.688620 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:58:09.688672 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:58:09.688681 1612198 kubeadm.go:310] 
	I0630 15:58:09.688721 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:58:09.688774 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:58:09.688912 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:58:09.688921 1612198 kubeadm.go:310] 
	I0630 15:58:09.689114 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:58:09.689178 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:58:09.689250 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:58:09.689265 1612198 kubeadm.go:310] 
	I0630 15:58:09.689442 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:58:09.689568 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:58:09.689580 1612198 kubeadm.go:310] 
	I0630 15:58:09.689730 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:58:09.689812 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:58:09.689888 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:58:09.689950 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:58:09.689957 1612198 kubeadm.go:310] 
	I0630 15:58:09.692282 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:58:09.692363 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:58:09.692431 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:58:09.692497 1612198 kubeadm.go:394] duration metric: took 8m0.118278148s to StartCluster
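	The 8m0.1s StartCluster total above is consistent with the timestamps in this log: roughly 4m4s spent failing to restart the existing control plane (ending at 15:54:14), followed by two kubeadm init attempts that each ran into the wait-control-plane timeout (15:54:16 to 15:56:12, then 15:56:13 to 15:58:09).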
	I0630 15:58:09.692554 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:58:09.692626 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:58:09.732128 1612198 cri.go:89] found id: ""
	I0630 15:58:09.732169 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.732178 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:58:09.732185 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:58:09.732247 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:58:09.764993 1612198 cri.go:89] found id: ""
	I0630 15:58:09.765024 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.765034 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:58:09.765042 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:58:09.765112 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:58:09.800767 1612198 cri.go:89] found id: ""
	I0630 15:58:09.800809 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.800820 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:58:09.800828 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:58:09.800888 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:58:09.834514 1612198 cri.go:89] found id: ""
	I0630 15:58:09.834544 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.834553 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:58:09.834560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:58:09.834636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:58:09.867918 1612198 cri.go:89] found id: ""
	I0630 15:58:09.867946 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.867955 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:58:09.867962 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:58:09.868016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:58:09.908166 1612198 cri.go:89] found id: ""
	I0630 15:58:09.908199 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.908208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:58:09.908215 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:58:09.908275 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:58:09.941613 1612198 cri.go:89] found id: ""
	I0630 15:58:09.941649 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.941658 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:58:09.941665 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:58:09.941721 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:58:09.983579 1612198 cri.go:89] found id: ""
	I0630 15:58:09.983617 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.983626 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:58:09.983637 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:58:09.983652 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:58:10.041447 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:58:10.041506 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:58:10.055597 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:58:10.055633 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:58:10.125308 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:58:10.125345 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:58:10.125363 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:58:10.231871 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:58:10.231919 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
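	The container-status gather above is a two-level shell fallback: `which crictl || echo crictl` resolves crictl's full path when installed (or leaves the bare name), and the trailing `|| sudo docker ps -a` covers hosts where the CRI CLI fails or is absent. Written out explicitly, a minimal equivalent sketch:
	
	    if command -v crictl >/dev/null 2>&1; then
	      sudo "$(command -v crictl)" ps -a    # preferred: ask the CRI runtime directly
	    else
	      sudo docker ps -a                    # fall back to the Docker CLI when crictl is missing
	    fi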
	W0630 15:58:10.270513 1612198 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0630 15:58:10.270594 1612198 out.go:270] * 
	W0630 15:58:10.270682 1612198 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.270703 1612198 out.go:270] * 
	W0630 15:58:10.272423 1612198 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0630 15:58:10.276013 1612198 out.go:201] 
	W0630 15:58:10.277283 1612198 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	(the two kubelet-check lines above were repeated 4 more times)
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.277328 1612198 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0630 15:58:10.277358 1612198 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0630 15:58:10.279010 1612198 out.go:201] 
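	The suggestion above amounts to restarting the profile with the kubelet's cgroup driver pinned to systemd. A sketch of that invocation (the profile name and Kubernetes version come from this log, and the runtime flag matches this crio job; any other flags of the original start command are not shown here):

		minikube start -p old-k8s-version-836310 \
		  --kubernetes-version=v1.20.0 \
		  --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd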
	
	
	==> CRI-O <==
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.361802714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299633361782164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59bafcf2-0acd-4beb-a890-9feb1e539983 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.362325684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98ca0aaa-3bf1-4679-b2bc-6f067add2d39 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.362395086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98ca0aaa-3bf1-4679-b2bc-6f067add2d39 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.362428869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=98ca0aaa-3bf1-4679-b2bc-6f067add2d39 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.393867408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac8b13b8-023b-433a-9361-9a1f3c9d44d1 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.393936719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac8b13b8-023b-433a-9361-9a1f3c9d44d1 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.395301931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d17399ba-ab0e-48d2-b1eb-6929dfe39f7b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.395649839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299633395632975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d17399ba-ab0e-48d2-b1eb-6929dfe39f7b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.396169190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcd80ffc-3bb3-4f0d-b01d-2d92edc621d3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.396211418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcd80ffc-3bb3-4f0d-b01d-2d92edc621d3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.396237812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dcd80ffc-3bb3-4f0d-b01d-2d92edc621d3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.428192648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f15ce12-613a-48e3-892a-0c196e2411a1 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.428257688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f15ce12-613a-48e3-892a-0c196e2411a1 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.429340778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0348a78-e4a9-4206-8eac-134d2d0075fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.429706232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299633429684766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0348a78-e4a9-4206-8eac-134d2d0075fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.430289907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cafd2ee8-43e8-4a43-901a-c8519d2ddabe name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.430331365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cafd2ee8-43e8-4a43-901a-c8519d2ddabe name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.430365882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cafd2ee8-43e8-4a43-901a-c8519d2ddabe name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.463546150Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9dd5b777-6888-4165-b439-62e3e8810ebe name=/runtime.v1.RuntimeService/Version
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.463615652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9dd5b777-6888-4165-b439-62e3e8810ebe name=/runtime.v1.RuntimeService/Version
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.464882587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6cb2b77-ded9-479e-a7b3-03d4223e5ff2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.465373057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299633465342312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6cb2b77-ded9-479e-a7b3-03d4223e5ff2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.466099329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dce364a-56ae-43e7-912f-c6d26148c2a0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.466150710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dce364a-56ae-43e7-912f-c6d26148c2a0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:07:13 old-k8s-version-836310 crio[829]: time="2025-06-30 16:07:13.466182303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2dce364a-56ae-43e7-912f-c6d26148c2a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
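	That refusal is consistent with the kube-apiserver static pod never having come up. A quick check on the node (assuming ss and curl are available inside the minikube VM):

		sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
		curl -k https://localhost:8443/healthz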
	
	
	==> dmesg <==
	[Jun30 15:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001300] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004063] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.051769] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun30 15:50] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108667] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.116066] kauditd_printk_skb: 46 callbacks suppressed
	[Jun30 15:56] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:07:13 up 17 min,  0 users,  load average: 0.00, 0.03, 0.03
	Linux old-k8s-version-836310 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/informers.(*sharedInformerFactory).Start
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:134 +0x191
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: goroutine 152 [runnable]:
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000ba81c0, 0xc0001000c0)
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: goroutine 133 [select]:
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0001356d0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000ab4240, 0x0, 0x0)
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000853180)
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jun 30 16:07:09 old-k8s-version-836310 kubelet[7884]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jun 30 16:07:09 old-k8s-version-836310 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 30 16:07:09 old-k8s-version-836310 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 30 16:07:10 old-k8s-version-836310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jun 30 16:07:10 old-k8s-version-836310 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 30 16:07:10 old-k8s-version-836310 kubelet[7895]: I0630 16:07:10.482338    7895 server.go:416] Version: v1.20.0
	Jun 30 16:07:10 old-k8s-version-836310 kubelet[7895]: I0630 16:07:10.482836    7895 server.go:837] Client rotation is on, will bootstrap in background
	Jun 30 16:07:10 old-k8s-version-836310 kubelet[7895]: I0630 16:07:10.484882    7895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 30 16:07:10 old-k8s-version-836310 kubelet[7895]: I0630 16:07:10.486179    7895 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jun 30 16:07:10 old-k8s-version-836310 kubelet[7895]: W0630 16:07:10.486302    7895 manager.go:159] Cannot detect current cgroup on cgroup v2
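	That final kubelet line is the likely root cause: the v1.20.0 kubelet cannot detect its cgroup on a cgroup v2 host, which fits the crash loop above (restart counter at 114). A one-line check of the host's cgroup mode (plain coreutils stat; it prints cgroup2fs on a cgroup v2 system, tmpfs or cgroupfs on v1):

		stat -fc %T /sys/fs/cgroup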
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (241.968849ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-836310" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
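The repeated warnings below are this poll failing against an unreachable apiserver; the rough manual equivalent (context name taken from the log) would be:

	kubectl --context old-k8s-version-836310 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard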
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(the warning above was repeated 32 more times while polling)
E0630 16:07:46.781085 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(the warning above was repeated 10 more times while polling)
E0630 16:07:57.879785 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(the warning above was repeated 50 more times while polling)
E0630 16:08:48.794506 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
(the warning above was repeated 10 more times while polling)
E0630 16:09:00.017504 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:09:21.463034 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:09:48.660910 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:10:20.916582 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:10:44.541816 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:11:04.684100 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 16:11:04.878284 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.88:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.88:8443: connect: connection refused
E0630 16:11:34.814869 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same connection-refused warning repeated 13 more times; duplicates elided]
E0630 16:11:47.889209 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same connection-refused warning repeated 24 more times; duplicates elided]
E0630 16:12:11.941459 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same connection-refused warning repeated 16 more times; duplicates elided]
E0630 16:12:27.943808 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same connection-refused warning repeated 19 more times; duplicates elided]
E0630 16:12:46.780621 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same connection-refused warning repeated 19 more times; duplicates elided]
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (248.97723ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-836310" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-836310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-836310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.986µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-836310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
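
For reference, the hundred-odd warnings collapsed above come from a simple list-and-retry poll against an apiserver that is no longer listening. Below is a minimal sketch of such a poll, assuming client-go; the kubeconfig path and the 5s retry interval are illustrative and not taken from the test source:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; the test harness builds its own config.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Same budget the test used: 9m0s for the dashboard pod to appear.
        ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
        defer cancel()

        for {
            pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            switch {
            case err != nil:
                // With the apiserver down this is the "connection refused"
                // warning repeated above; log it and keep polling.
                fmt.Println("WARNING: pod list returned:", err)
            case len(pods.Items) > 0:
                fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("pod failed to start within 9m0s:", ctx.Err())
                return
            case <-time.After(5 * time.Second):
            }
        }
    }

Because the apiserver stays down for the whole window, every List call fails immediately, so the loop simply burns through the 9m0s budget, which is the "context deadline exceeded" failure reported above.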
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (260.280842ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-836310 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-668101 sudo crio                          | flannel-668101 | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| delete  | -p flannel-668101                                    | flannel-668101 | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo docker                         | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo cat                            | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo                                | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo find                           | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-668101 sudo crio                           | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p bridge-668101                                     | bridge-668101  | jenkins | v1.36.0 | 30 Jun 25 15:54 UTC | 30 Jun 25 15:54 UTC |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 15:52:42
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 15:52:42.950710 1620744 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:52:42.950982 1620744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:52:42.950992 1620744 out.go:358] Setting ErrFile to fd 2...
	I0630 15:52:42.950997 1620744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:52:42.951256 1620744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:52:42.951919 1620744 out.go:352] Setting JSON to false
	I0630 15:52:42.953176 1620744 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":34455,"bootTime":1751264308,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:52:42.953303 1620744 start.go:140] virtualization: kvm guest
	I0630 15:52:42.956113 1620744 out.go:177] * [bridge-668101] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:52:42.957699 1620744 notify.go:220] Checking for updates...
	I0630 15:52:42.957717 1620744 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:52:42.959576 1620744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:52:42.961566 1620744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:52:42.963634 1620744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:42.965261 1620744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:52:42.966949 1620744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:52:42.968735 1620744 config.go:182] Loaded profile config "enable-default-cni-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:42.968869 1620744 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:42.968990 1620744 config.go:182] Loaded profile config "old-k8s-version-836310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:52:42.969114 1620744 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:52:43.011541 1620744 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:52:43.013118 1620744 start.go:304] selected driver: kvm2
	I0630 15:52:43.013145 1620744 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:52:43.013160 1620744 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:52:43.014286 1620744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:52:43.014403 1620744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 15:52:43.032217 1620744 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 15:52:43.032283 1620744 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 15:52:43.032559 1620744 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:52:43.032604 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:52:43.032615 1620744 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 15:52:43.032686 1620744 start.go:347] cluster config:
	{Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:52:43.032888 1620744 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 15:52:43.035138 1620744 out.go:177] * Starting "bridge-668101" primary control-plane node in "bridge-668101" cluster
	I0630 15:52:41.357269 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:41.358093 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find current IP address of domain flannel-668101 in network mk-flannel-668101
	I0630 15:52:41.358123 1619158 main.go:141] libmachine: (flannel-668101) DBG | I0630 15:52:41.358037 1619189 retry.go:31] will retry after 4.215568728s: waiting for domain to come up
	W0630 15:52:44.159824 1617293 pod_ready.go:104] pod "coredns-674b8bbfcf-6rphx" is not "Ready", error: <nil>
	I0630 15:52:44.656114 1617293 pod_ready.go:99] pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace is gone: getting pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace (will retry): pods "coredns-674b8bbfcf-6rphx" not found
	I0630 15:52:44.656143 1617293 pod_ready.go:86] duration metric: took 10.003645641s for pod "coredns-674b8bbfcf-6rphx" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.656159 1617293 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-v5d7m" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.660419 1617293 pod_ready.go:94] pod "coredns-674b8bbfcf-v5d7m" is "Ready"
	I0630 15:52:44.660451 1617293 pod_ready.go:86] duration metric: took 4.285712ms for pod "coredns-674b8bbfcf-v5d7m" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.662598 1617293 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.665846 1617293 pod_ready.go:94] pod "etcd-enable-default-cni-668101" is "Ready"
	I0630 15:52:44.665873 1617293 pod_ready.go:86] duration metric: took 3.248201ms for pod "etcd-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.667505 1617293 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.672030 1617293 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-668101" is "Ready"
	I0630 15:52:44.672060 1617293 pod_ready.go:86] duration metric: took 4.533989ms for pod "kube-apiserver-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:44.673855 1617293 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.057371 1617293 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-668101" is "Ready"
	I0630 15:52:45.057433 1617293 pod_ready.go:86] duration metric: took 383.556453ms for pod "kube-controller-manager-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.257321 1617293 pod_ready.go:83] waiting for pod "kube-proxy-gx8xr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.657721 1617293 pod_ready.go:94] pod "kube-proxy-gx8xr" is "Ready"
	I0630 15:52:45.657765 1617293 pod_ready.go:86] duration metric: took 400.308271ms for pod "kube-proxy-gx8xr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:45.857507 1617293 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:46.256921 1617293 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-668101" is "Ready"
	I0630 15:52:46.256953 1617293 pod_ready.go:86] duration metric: took 399.409105ms for pod "kube-scheduler-enable-default-cni-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:52:46.256970 1617293 pod_ready.go:40] duration metric: took 11.610545265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:52:46.306916 1617293 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:52:46.308982 1617293 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-668101" cluster and "default" namespace by default
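The pod_ready.go waits above can be reproduced by hand with kubectl; a minimal sketch, assuming the kubeconfig already selects the enable-default-cni-668101 context (cluster name and label selectors taken from the 15:52:46 log lines):

    # wait for CoreDNS pods to report Ready (mirrors the pod_ready.go loop)
    kubectl --context enable-default-cni-668101 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    # spot-check the control-plane components the test also waited on
    kubectl --context enable-default-cni-668101 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'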
	W0630 15:52:42.720632 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:42.720657 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:42.720672 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:42.805318 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:42.805369 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:45.356097 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:45.375177 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:45.375249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:45.411531 1612198 cri.go:89] found id: ""
	I0630 15:52:45.411573 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.411585 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:45.411594 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:45.411670 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:45.446010 1612198 cri.go:89] found id: ""
	I0630 15:52:45.446040 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.446049 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:45.446055 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:45.446126 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:45.483165 1612198 cri.go:89] found id: ""
	I0630 15:52:45.483213 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.483225 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:45.483234 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:45.483309 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:45.519693 1612198 cri.go:89] found id: ""
	I0630 15:52:45.519724 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.519732 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:45.519739 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:45.519813 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:45.554863 1612198 cri.go:89] found id: ""
	I0630 15:52:45.554902 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.554913 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:45.554921 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:45.555000 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:45.590429 1612198 cri.go:89] found id: ""
	I0630 15:52:45.590460 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.590469 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:45.590476 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:45.590545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:45.625876 1612198 cri.go:89] found id: ""
	I0630 15:52:45.625914 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.625927 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:45.625935 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:45.626002 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:45.663157 1612198 cri.go:89] found id: ""
	I0630 15:52:45.663188 1612198 logs.go:282] 0 containers: []
	W0630 15:52:45.663197 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:45.663210 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:45.663227 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:45.717765 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:45.717817 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:45.731782 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:45.731815 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:45.798057 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:45.798090 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:45.798106 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:45.878867 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:45.878917 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:43.036635 1620744 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:52:43.036694 1620744 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 15:52:43.036707 1620744 cache.go:56] Caching tarball of preloaded images
	I0630 15:52:43.036821 1620744 preload.go:172] Found /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0630 15:52:43.036837 1620744 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on crio
	I0630 15:52:43.036964 1620744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json ...
	I0630 15:52:43.036993 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json: {Name:mke71cd9af919bb85465b3e686b56c4cd0e1c7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:52:43.037185 1620744 start.go:360] acquireMachinesLock for bridge-668101: {Name:mk94f28e6e139ddc13f15a3e4e4c9e62d9548530 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
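The preload steps above only verify that a cached tarball exists; the same cache state can be confirmed by hand (path taken from the log, assuming the jenkins home layout shown there):

    # confirm the preloaded image tarball minikube found in its cache
    ls -lh /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/
    # a file named preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
    # means the download is skipped, as cache.go:56 and preload.go:172 report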
	I0630 15:52:45.576190 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:45.576849 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find current IP address of domain flannel-668101 in network mk-flannel-668101
	I0630 15:52:45.576874 1619158 main.go:141] libmachine: (flannel-668101) DBG | I0630 15:52:45.576802 1619189 retry.go:31] will retry after 5.00816622s: waiting for domain to come up
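The retry loop above is waiting for libvirt's DHCP server to hand the flannel-668101 domain an address; a sketch of the equivalent manual check, assuming the mk-flannel-668101 network name and qemu:///system URI from the log:

    # list DHCP leases on the minikube-created libvirt network
    sudo virsh --connect qemu:///system net-dhcp-leases mk-flannel-668101
    # until a lease for MAC 52:54:00:d0:56:26 appears, libmachine keeps retrying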
	I0630 15:52:48.422047 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:48.441634 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:48.441712 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:48.482676 1612198 cri.go:89] found id: ""
	I0630 15:52:48.482706 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.482714 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:48.482721 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:48.482781 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:48.523604 1612198 cri.go:89] found id: ""
	I0630 15:52:48.523645 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.523659 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:48.523669 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:48.523740 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:48.566545 1612198 cri.go:89] found id: ""
	I0630 15:52:48.566576 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.566588 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:48.566595 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:48.566667 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:48.602166 1612198 cri.go:89] found id: ""
	I0630 15:52:48.602204 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.602219 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:48.602228 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:48.602296 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:48.645664 1612198 cri.go:89] found id: ""
	I0630 15:52:48.645701 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.645712 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:48.645724 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:48.645796 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:48.689364 1612198 cri.go:89] found id: ""
	I0630 15:52:48.689437 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.689449 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:48.689457 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:48.689532 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:48.727484 1612198 cri.go:89] found id: ""
	I0630 15:52:48.727594 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.727614 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:48.727623 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:48.727695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:48.765617 1612198 cri.go:89] found id: ""
	I0630 15:52:48.765649 1612198 logs.go:282] 0 containers: []
	W0630 15:52:48.765662 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:48.765676 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:48.765696 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:48.832480 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:48.832525 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:48.851001 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:48.851033 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:48.935090 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:48.935117 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:48.935139 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:49.020511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:49.020556 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.569582 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:51.586531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:51.586608 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:51.623986 1612198 cri.go:89] found id: ""
	I0630 15:52:51.624022 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.624034 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:51.624041 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:51.624097 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:51.660234 1612198 cri.go:89] found id: ""
	I0630 15:52:51.660289 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.660311 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:51.660321 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:51.660396 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:51.694392 1612198 cri.go:89] found id: ""
	I0630 15:52:51.694421 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.694431 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:51.694439 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:51.694509 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:51.733636 1612198 cri.go:89] found id: ""
	I0630 15:52:51.733679 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.733692 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:51.733700 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:51.733767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:51.770073 1612198 cri.go:89] found id: ""
	I0630 15:52:51.770105 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.770116 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:51.770125 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:51.770193 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:51.806054 1612198 cri.go:89] found id: ""
	I0630 15:52:51.806082 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.806096 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:51.806105 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:51.806166 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:51.844220 1612198 cri.go:89] found id: ""
	I0630 15:52:51.844253 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.844263 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:51.844270 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:51.844337 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:51.879139 1612198 cri.go:89] found id: ""
	I0630 15:52:51.879180 1612198 logs.go:282] 0 containers: []
	W0630 15:52:51.879192 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:51.879206 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:51.879225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:51.959131 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:51.959178 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:51.999852 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:51.999898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:52.054538 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:52.054586 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:52:52.068544 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:52.068582 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:52.141184 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
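Every "describe nodes" attempt above fails the same way because nothing is listening on localhost:8443; a short triage sketch one could run on the node (commands mirror the ones the log itself is running):

    # is any apiserver container present at all? (the log repeatedly finds none)
    sudo crictl ps -a --name kube-apiserver
    # is anything bound to the apiserver port?
    sudo ss -ltn 'sport = :8443'
    # kubelet logs usually say why the static pod never came up
    sudo journalctl -u kubelet -n 100 --no-pager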
	I0630 15:52:50.586392 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.586877 1619158 main.go:141] libmachine: (flannel-668101) found domain IP: 192.168.50.164
	I0630 15:52:50.586929 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has current primary IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.586951 1619158 main.go:141] libmachine: (flannel-668101) reserving static IP address...
	I0630 15:52:50.587266 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find host DHCP lease matching {name: "flannel-668101", mac: "52:54:00:d0:56:26", ip: "192.168.50.164"} in network mk-flannel-668101
	I0630 15:52:50.692673 1619158 main.go:141] libmachine: (flannel-668101) DBG | Getting to WaitForSSH function...
	I0630 15:52:50.692714 1619158 main.go:141] libmachine: (flannel-668101) reserved static IP address 192.168.50.164 for domain flannel-668101
	I0630 15:52:50.692729 1619158 main.go:141] libmachine: (flannel-668101) waiting for SSH...
	I0630 15:52:50.695660 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:50.696050 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101
	I0630 15:52:50.696074 1619158 main.go:141] libmachine: (flannel-668101) DBG | unable to find defined IP address of network mk-flannel-668101 interface with MAC address 52:54:00:d0:56:26
	I0630 15:52:50.696281 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH client type: external
	I0630 15:52:50.696306 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa (-rw-------)
	I0630 15:52:50.696335 1619158 main.go:141] libmachine: (flannel-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:52:50.696364 1619158 main.go:141] libmachine: (flannel-668101) DBG | About to run SSH command:
	I0630 15:52:50.696404 1619158 main.go:141] libmachine: (flannel-668101) DBG | exit 0
	I0630 15:52:50.701524 1619158 main.go:141] libmachine: (flannel-668101) DBG | SSH cmd err, output: exit status 255: 
	I0630 15:52:50.701550 1619158 main.go:141] libmachine: (flannel-668101) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0630 15:52:50.701561 1619158 main.go:141] libmachine: (flannel-668101) DBG | command : exit 0
	I0630 15:52:50.701568 1619158 main.go:141] libmachine: (flannel-668101) DBG | err     : exit status 255
	I0630 15:52:50.701579 1619158 main.go:141] libmachine: (flannel-668101) DBG | output  : 
	I0630 15:52:53.701789 1619158 main.go:141] libmachine: (flannel-668101) DBG | Getting to WaitForSSH function...
	I0630 15:52:53.704360 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.704932 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.704962 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.705130 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH client type: external
	I0630 15:52:53.705161 1619158 main.go:141] libmachine: (flannel-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa (-rw-------)
	I0630 15:52:53.705186 1619158 main.go:141] libmachine: (flannel-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:52:53.705196 1619158 main.go:141] libmachine: (flannel-668101) DBG | About to run SSH command:
	I0630 15:52:53.705216 1619158 main.go:141] libmachine: (flannel-668101) DBG | exit 0
	I0630 15:52:53.830137 1619158 main.go:141] libmachine: (flannel-668101) DBG | SSH cmd err, output: <nil>: 
	I0630 15:52:53.830489 1619158 main.go:141] libmachine: (flannel-668101) KVM machine creation complete
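The WaitForSSH probe above is just ssh running `exit 0` with the machine key; a sketch of the same probe run by hand, assuming the key path and IP from the log:

    # same probe libmachine runs: succeeds once sshd accepts the key
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa \
        docker@192.168.50.164 'exit 0' && echo "SSH is up"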
	I0630 15:52:53.831158 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetConfigRaw
	I0630 15:52:53.831811 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:53.832305 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:53.832539 1619158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:52:53.832558 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:52:53.834243 1619158 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:52:53.834258 1619158 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:52:53.834264 1619158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:52:53.834269 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:53.837692 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.838098 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.838132 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.838367 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:53.838567 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.838712 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.838827 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:53.838973 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:53.839228 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:53.839240 1619158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:52:53.941129 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:52:53.941166 1619158 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:52:53.941179 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:53.945852 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.946724 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:53.946789 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:53.947156 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:53.947488 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.947724 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:53.947876 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:53.948105 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:53.948402 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:53.948418 1619158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:52:54.054669 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:52:54.054748 1619158 main.go:141] libmachine: found compatible host: buildroot
	I0630 15:52:54.054758 1619158 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:52:54.054767 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.055102 1619158 buildroot.go:166] provisioning hostname "flannel-668101"
	I0630 15:52:54.055132 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.055454 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.059064 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.059471 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.059502 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.059708 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:54.059899 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.060070 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.060224 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:54.060393 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:54.060624 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:54.060640 1619158 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-668101 && echo "flannel-668101" | sudo tee /etc/hostname
	I0630 15:52:54.177979 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-668101
	
	I0630 15:52:54.178018 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.181025 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.181363 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.181395 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.181596 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:54.181838 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.182126 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:54.182320 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:54.182493 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:54.182708 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:54.182725 1619158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-668101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-668101/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-668101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:52:54.297007 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:52:54.297044 1619158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:52:54.297108 1619158 buildroot.go:174] setting up certificates
	I0630 15:52:54.297155 1619158 provision.go:84] configureAuth start
	I0630 15:52:54.297174 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetMachineName
	I0630 15:52:54.297629 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:54.300624 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.300972 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.301001 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.301156 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:54.303586 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.303998 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:54.304030 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:54.304173 1619158 provision.go:143] copyHostCerts
	I0630 15:52:54.304256 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:52:54.304278 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:52:54.304353 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:52:54.304508 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:52:54.304518 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:52:54.304545 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:52:54.304611 1619158 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:52:54.304618 1619158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:52:54.304640 1619158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:52:54.304715 1619158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.flannel-668101 san=[127.0.0.1 192.168.50.164 flannel-668101 localhost minikube]
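provision.go:117 above generates a server certificate with the SANs listed at the end of that line; once written, the SANs can be checked with openssl (a sketch, path from the log):

    # confirm the generated server cert carries the expected SANs
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expected: 127.0.0.1, 192.168.50.164, flannel-668101, localhost, minikube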
	I0630 15:52:55.093359 1619158 provision.go:177] copyRemoteCerts
	I0630 15:52:55.093451 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:52:55.093490 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.096608 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.097063 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.097100 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.097382 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.097605 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.097804 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.097967 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.181657 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:52:55.212265 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0630 15:52:55.244844 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:52:55.279323 1619158 provision.go:87] duration metric: took 982.144024ms to configureAuth
	I0630 15:52:55.279365 1619158 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:52:55.279616 1619158 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:52:55.279709 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.283643 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.284181 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.284211 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.284404 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.284627 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.284847 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.285000 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.285212 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:55.285583 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:55.285612 1619158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:52:55.778589 1620744 start.go:364] duration metric: took 12.741358919s to acquireMachinesLock for "bridge-668101"
	I0630 15:52:55.778680 1620744 start.go:93] Provisioning new machine with config: &{Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:52:55.778835 1620744 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 15:52:55.530045 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0630 15:52:55.530104 1619158 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:52:55.530116 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetURL
	I0630 15:52:55.531952 1619158 main.go:141] libmachine: (flannel-668101) DBG | using libvirt version 6000000
	I0630 15:52:55.534427 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.534823 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.534843 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.535146 1619158 main.go:141] libmachine: Docker is up and running!
	I0630 15:52:55.535159 1619158 main.go:141] libmachine: Reticulating splines...
	I0630 15:52:55.535167 1619158 client.go:171] duration metric: took 30.008807578s to LocalClient.Create
	I0630 15:52:55.535196 1619158 start.go:167] duration metric: took 30.008887821s to libmachine.API.Create "flannel-668101"
	I0630 15:52:55.535211 1619158 start.go:293] postStartSetup for "flannel-668101" (driver="kvm2")
	I0630 15:52:55.535279 1619158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:52:55.535323 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.535615 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:52:55.535648 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.538056 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.538461 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.538505 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.538621 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.538865 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.539071 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.539281 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.621263 1619158 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:52:55.626036 1619158 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:52:55.626073 1619158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:52:55.626186 1619158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:52:55.626347 1619158 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:52:55.626445 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:52:55.637649 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:52:55.667310 1619158 start.go:296] duration metric: took 132.08213ms for postStartSetup
	I0630 15:52:55.667372 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetConfigRaw
	I0630 15:52:55.668073 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:55.671293 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.671868 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.671903 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.672201 1619158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/config.json ...
	I0630 15:52:55.672423 1619158 start.go:128] duration metric: took 30.167785685s to createHost
	I0630 15:52:55.672451 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.674800 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.675142 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.675174 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.675451 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.675643 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.675788 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.676031 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.676253 1619158 main.go:141] libmachine: Using SSH client type: native
	I0630 15:52:55.676551 1619158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.50.164 22 <nil> <nil>}
	I0630 15:52:55.676567 1619158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:52:55.778402 1619158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298775.758912603
	
	I0630 15:52:55.778427 1619158 fix.go:216] guest clock: 1751298775.758912603
	I0630 15:52:55.778435 1619158 fix.go:229] Guest: 2025-06-30 15:52:55.758912603 +0000 UTC Remote: 2025-06-30 15:52:55.67243923 +0000 UTC m=+30.329704815 (delta=86.473373ms)
	I0630 15:52:55.778459 1619158 fix.go:200] guest clock delta is within tolerance: 86.473373ms
	I0630 15:52:55.778466 1619158 start.go:83] releasing machines lock for "flannel-668101", held for 30.273912922s
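fix.go above compares guest and host clocks by running `date +%s.%N` over SSH; the same delta can be measured by hand (a sketch, reusing the flannel machine key and IP from the log):

    # guest timestamp, then host timestamp; the difference is the clock delta
    ssh -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa \
        docker@192.168.50.164 'date +%s.%N'; date +%s.%N
    # the log above measured an 86ms delta, within minikube's tolerance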
	I0630 15:52:55.778518 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.778846 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:55.782021 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.782499 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.782533 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.782737 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783225 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783481 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:52:55.783595 1619158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:52:55.783641 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.783703 1619158 ssh_runner.go:195] Run: cat /version.json
	I0630 15:52:55.783731 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:52:55.786539 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.786668 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.786964 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.786995 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.787022 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:55.787034 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:55.787195 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.787318 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:52:55.787429 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.787516 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:52:55.787627 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.787712 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:52:55.787790 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.787848 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:52:55.874997 1619158 ssh_runner.go:195] Run: systemctl --version
	I0630 15:52:55.904909 1619158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:52:56.070066 1619158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:52:56.076773 1619158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:52:56.076855 1619158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:52:56.096159 1619158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
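A quick way to confirm what the find/mv step above actually renamed, since only the summary line appears in the log (a hedged sketch; the `.mk_disabled` suffix comes from the mv command above):

    ls /etc/cni/net.d/
    # per the log, 87-podman-bridge.conflist should now appear as
    # 87-podman-bridge.conflist.mk_disabled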
	I0630 15:52:56.096192 1619158 start.go:495] detecting cgroup driver to use...
	I0630 15:52:56.096267 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:52:56.116203 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:52:56.134008 1619158 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:52:56.134070 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:52:56.150561 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:52:56.166862 1619158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:52:56.306622 1619158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:52:56.473344 1619158 docker.go:246] disabling docker service ...
	I0630 15:52:56.473467 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:52:56.490252 1619158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:52:56.505665 1619158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:52:56.705455 1619158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:52:56.856676 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:52:56.873735 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:52:56.897728 1619158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:52:56.897807 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.909980 1619158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:52:56.910087 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.921206 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.932511 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.943614 1619158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:52:56.956362 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.968071 1619158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:52:56.987887 1619158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
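Taken together, the sed edits above leave a CRI-O drop-in roughly like the following. This is an illustrative reconstruction, not a dump of the real file; the TOML section headers are the standard CRI-O layout and are an assumption here:

    # /etc/crio/crio.conf.d/02-crio.conf (sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]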
	I0630 15:52:56.999240 1619158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:52:57.009535 1619158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:52:57.009612 1619158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:52:57.024825 1619158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
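The failed sysctl above is expected on a freshly booted VM: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is exactly what the modprobe above fixes. A hedged manual equivalent of the same recovery path:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of "cannot stat"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"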
	I0630 15:52:57.035690 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:52:57.175638 1619158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:52:57.278362 1619158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:52:57.278504 1619158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:52:57.285443 1619158 start.go:563] Will wait 60s for crictl version
	I0630 15:52:57.285511 1619158 ssh_runner.go:195] Run: which crictl
	I0630 15:52:57.289297 1619158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:52:57.341170 1619158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:52:57.341278 1619158 ssh_runner.go:195] Run: crio --version
	I0630 15:52:57.370996 1619158 ssh_runner.go:195] Run: crio --version
	I0630 15:52:57.408719 1619158 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:52:54.642061 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:54.657561 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:54.657631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:54.699127 1612198 cri.go:89] found id: ""
	I0630 15:52:54.699156 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.699165 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:54.699172 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:54.699249 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:54.743537 1612198 cri.go:89] found id: ""
	I0630 15:52:54.743582 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.743595 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:54.743604 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:54.743691 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:54.793655 1612198 cri.go:89] found id: ""
	I0630 15:52:54.793692 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.793705 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:54.793714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:54.793789 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:54.836404 1612198 cri.go:89] found id: ""
	I0630 15:52:54.836439 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.836450 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:54.836458 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:54.836530 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:54.881834 1612198 cri.go:89] found id: ""
	I0630 15:52:54.881866 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.881874 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:54.881881 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:54.881945 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:54.920907 1612198 cri.go:89] found id: ""
	I0630 15:52:54.920937 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.920945 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:54.920952 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:54.921019 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:54.964724 1612198 cri.go:89] found id: ""
	I0630 15:52:54.964777 1612198 logs.go:282] 0 containers: []
	W0630 15:52:54.964790 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:54.964799 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:54.964877 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:55.000611 1612198 cri.go:89] found id: ""
	I0630 15:52:55.000646 1612198 logs.go:282] 0 containers: []
	W0630 15:52:55.000654 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:55.000665 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:55.000678 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:55.075252 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:55.075285 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:55.075306 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:55.162081 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:55.162133 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:55.226240 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:55.226277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:55.297365 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:55.297429 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
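Process 1612198 keeps cycling through the same diagnosis because every crictl query comes back empty and the apiserver on localhost:8443 refuses connections. The same checks can be reproduced by hand on the node (commands copied from the log):

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = the container never started
    sudo journalctl -u kubelet -n 400                 # kubelet logs usually explain why
    sudo journalctl -u crio -n 400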
	I0630 15:52:55.781091 1620744 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0630 15:52:55.781346 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:52:55.781446 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:52:55.799943 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0630 15:52:55.800489 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:52:55.801103 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:52:55.801134 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:52:55.801483 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:52:55.801678 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:52:55.801826 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:52:55.802012 1620744 start.go:159] libmachine.API.Create for "bridge-668101" (driver="kvm2")
	I0630 15:52:55.802045 1620744 client.go:168] LocalClient.Create starting
	I0630 15:52:55.802082 1620744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem
	I0630 15:52:55.802123 1620744 main.go:141] libmachine: Decoding PEM data...
	I0630 15:52:55.802145 1620744 main.go:141] libmachine: Parsing certificate...
	I0630 15:52:55.802228 1620744 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem
	I0630 15:52:55.802259 1620744 main.go:141] libmachine: Decoding PEM data...
	I0630 15:52:55.802275 1620744 main.go:141] libmachine: Parsing certificate...
	I0630 15:52:55.802328 1620744 main.go:141] libmachine: Running pre-create checks...
	I0630 15:52:55.802341 1620744 main.go:141] libmachine: (bridge-668101) Calling .PreCreateCheck
	I0630 15:52:55.802728 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:52:55.803114 1620744 main.go:141] libmachine: Creating machine...
	I0630 15:52:55.803131 1620744 main.go:141] libmachine: (bridge-668101) Calling .Create
	I0630 15:52:55.803562 1620744 main.go:141] libmachine: (bridge-668101) creating KVM machine...
	I0630 15:52:55.803587 1620744 main.go:141] libmachine: (bridge-668101) creating network...
	I0630 15:52:55.805278 1620744 main.go:141] libmachine: (bridge-668101) DBG | found existing default KVM network
	I0630 15:52:55.806568 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.806371 1620899 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2c:4b:58} reservation:<nil>}
	I0630 15:52:55.807384 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.807300 1620899 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:29:de} reservation:<nil>}
	I0630 15:52:55.808183 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.808055 1620899 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:da:d8:99} reservation:<nil>}
	I0630 15:52:55.809357 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.809236 1620899 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002cac60}
	I0630 15:52:55.809380 1620744 main.go:141] libmachine: (bridge-668101) DBG | created network xml: 
	I0630 15:52:55.809386 1620744 main.go:141] libmachine: (bridge-668101) DBG | <network>
	I0630 15:52:55.809392 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <name>mk-bridge-668101</name>
	I0630 15:52:55.809397 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <dns enable='no'/>
	I0630 15:52:55.809425 1620744 main.go:141] libmachine: (bridge-668101) DBG |   
	I0630 15:52:55.809435 1620744 main.go:141] libmachine: (bridge-668101) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0630 15:52:55.809443 1620744 main.go:141] libmachine: (bridge-668101) DBG |     <dhcp>
	I0630 15:52:55.809449 1620744 main.go:141] libmachine: (bridge-668101) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0630 15:52:55.809456 1620744 main.go:141] libmachine: (bridge-668101) DBG |     </dhcp>
	I0630 15:52:55.809476 1620744 main.go:141] libmachine: (bridge-668101) DBG |   </ip>
	I0630 15:52:55.809495 1620744 main.go:141] libmachine: (bridge-668101) DBG |   
	I0630 15:52:55.809501 1620744 main.go:141] libmachine: (bridge-668101) DBG | </network>
	I0630 15:52:55.809510 1620744 main.go:141] libmachine: (bridge-668101) DBG | 
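The XML above is handed to libvirt through minikube's Go bindings rather than the CLI, but a rough manual equivalent would look like this (the /tmp path is hypothetical):

    virsh net-define /tmp/mk-bridge-668101.xml
    virsh net-start mk-bridge-668101
    virsh net-list --all    # mk-bridge-668101 should show as active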
	I0630 15:52:55.815963 1620744 main.go:141] libmachine: (bridge-668101) DBG | trying to create private KVM network mk-bridge-668101 192.168.72.0/24...
	I0630 15:52:55.898159 1620744 main.go:141] libmachine: (bridge-668101) setting up store path in /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 ...
	I0630 15:52:55.898202 1620744 main.go:141] libmachine: (bridge-668101) building disk image from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 15:52:55.898214 1620744 main.go:141] libmachine: (bridge-668101) DBG | private KVM network mk-bridge-668101 192.168.72.0/24 created
	I0630 15:52:55.898234 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:55.898059 1620899 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:55.898373 1620744 main.go:141] libmachine: (bridge-668101) Downloading /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 15:52:56.221476 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.221233 1620899 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa...
	I0630 15:52:56.640944 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.640745 1620899 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/bridge-668101.rawdisk...
	I0630 15:52:56.640998 1620744 main.go:141] libmachine: (bridge-668101) DBG | Writing magic tar header
	I0630 15:52:56.641019 1620744 main.go:141] libmachine: (bridge-668101) DBG | Writing SSH key tar header
	I0630 15:52:56.641031 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:56.640908 1620899 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 ...
	I0630 15:52:56.641054 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101
	I0630 15:52:56.641093 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101 (perms=drwx------)
	I0630 15:52:56.641214 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines
	I0630 15:52:56.641244 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:52:56.641260 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube/machines (perms=drwxr-xr-x)
	I0630 15:52:56.641272 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1550299
	I0630 15:52:56.641286 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 15:52:56.641298 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home/jenkins
	I0630 15:52:56.641308 1620744 main.go:141] libmachine: (bridge-668101) DBG | checking permissions on dir: /home
	I0630 15:52:56.641320 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299/.minikube (perms=drwxr-xr-x)
	I0630 15:52:56.641331 1620744 main.go:141] libmachine: (bridge-668101) DBG | skipping /home - not owner
	I0630 15:52:56.641357 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration/20991-1550299 (perms=drwxrwxr-x)
	I0630 15:52:56.641377 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 15:52:56.641386 1620744 main.go:141] libmachine: (bridge-668101) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 15:52:56.641397 1620744 main.go:141] libmachine: (bridge-668101) creating domain...
	I0630 15:52:56.642571 1620744 main.go:141] libmachine: (bridge-668101) define libvirt domain using xml: 
	I0630 15:52:56.642602 1620744 main.go:141] libmachine: (bridge-668101) <domain type='kvm'>
	I0630 15:52:56.642633 1620744 main.go:141] libmachine: (bridge-668101)   <name>bridge-668101</name>
	I0630 15:52:56.642652 1620744 main.go:141] libmachine: (bridge-668101)   <memory unit='MiB'>3072</memory>
	I0630 15:52:56.642667 1620744 main.go:141] libmachine: (bridge-668101)   <vcpu>2</vcpu>
	I0630 15:52:56.642691 1620744 main.go:141] libmachine: (bridge-668101)   <features>
	I0630 15:52:56.642705 1620744 main.go:141] libmachine: (bridge-668101)     <acpi/>
	I0630 15:52:56.642713 1620744 main.go:141] libmachine: (bridge-668101)     <apic/>
	I0630 15:52:56.642725 1620744 main.go:141] libmachine: (bridge-668101)     <pae/>
	I0630 15:52:56.642745 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.642783 1620744 main.go:141] libmachine: (bridge-668101)   </features>
	I0630 15:52:56.642806 1620744 main.go:141] libmachine: (bridge-668101)   <cpu mode='host-passthrough'>
	I0630 15:52:56.642838 1620744 main.go:141] libmachine: (bridge-668101)   
	I0630 15:52:56.642863 1620744 main.go:141] libmachine: (bridge-668101)   </cpu>
	I0630 15:52:56.642880 1620744 main.go:141] libmachine: (bridge-668101)   <os>
	I0630 15:52:56.642900 1620744 main.go:141] libmachine: (bridge-668101)     <type>hvm</type>
	I0630 15:52:56.642914 1620744 main.go:141] libmachine: (bridge-668101)     <boot dev='cdrom'/>
	I0630 15:52:56.642925 1620744 main.go:141] libmachine: (bridge-668101)     <boot dev='hd'/>
	I0630 15:52:56.642944 1620744 main.go:141] libmachine: (bridge-668101)     <bootmenu enable='no'/>
	I0630 15:52:56.642956 1620744 main.go:141] libmachine: (bridge-668101)   </os>
	I0630 15:52:56.642969 1620744 main.go:141] libmachine: (bridge-668101)   <devices>
	I0630 15:52:56.642980 1620744 main.go:141] libmachine: (bridge-668101)     <disk type='file' device='cdrom'>
	I0630 15:52:56.642999 1620744 main.go:141] libmachine: (bridge-668101)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/boot2docker.iso'/>
	I0630 15:52:56.643011 1620744 main.go:141] libmachine: (bridge-668101)       <target dev='hdc' bus='scsi'/>
	I0630 15:52:56.643025 1620744 main.go:141] libmachine: (bridge-668101)       <readonly/>
	I0630 15:52:56.643041 1620744 main.go:141] libmachine: (bridge-668101)     </disk>
	I0630 15:52:56.643059 1620744 main.go:141] libmachine: (bridge-668101)     <disk type='file' device='disk'>
	I0630 15:52:56.643073 1620744 main.go:141] libmachine: (bridge-668101)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 15:52:56.643102 1620744 main.go:141] libmachine: (bridge-668101)       <source file='/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/bridge-668101.rawdisk'/>
	I0630 15:52:56.643114 1620744 main.go:141] libmachine: (bridge-668101)       <target dev='hda' bus='virtio'/>
	I0630 15:52:56.643122 1620744 main.go:141] libmachine: (bridge-668101)     </disk>
	I0630 15:52:56.643135 1620744 main.go:141] libmachine: (bridge-668101)     <interface type='network'>
	I0630 15:52:56.643147 1620744 main.go:141] libmachine: (bridge-668101)       <source network='mk-bridge-668101'/>
	I0630 15:52:56.643170 1620744 main.go:141] libmachine: (bridge-668101)       <model type='virtio'/>
	I0630 15:52:56.643189 1620744 main.go:141] libmachine: (bridge-668101)     </interface>
	I0630 15:52:56.643202 1620744 main.go:141] libmachine: (bridge-668101)     <interface type='network'>
	I0630 15:52:56.643213 1620744 main.go:141] libmachine: (bridge-668101)       <source network='default'/>
	I0630 15:52:56.643225 1620744 main.go:141] libmachine: (bridge-668101)       <model type='virtio'/>
	I0630 15:52:56.643235 1620744 main.go:141] libmachine: (bridge-668101)     </interface>
	I0630 15:52:56.643244 1620744 main.go:141] libmachine: (bridge-668101)     <serial type='pty'>
	I0630 15:52:56.643254 1620744 main.go:141] libmachine: (bridge-668101)       <target port='0'/>
	I0630 15:52:56.643269 1620744 main.go:141] libmachine: (bridge-668101)     </serial>
	I0630 15:52:56.643284 1620744 main.go:141] libmachine: (bridge-668101)     <console type='pty'>
	I0630 15:52:56.643297 1620744 main.go:141] libmachine: (bridge-668101)       <target type='serial' port='0'/>
	I0630 15:52:56.643307 1620744 main.go:141] libmachine: (bridge-668101)     </console>
	I0630 15:52:56.643318 1620744 main.go:141] libmachine: (bridge-668101)     <rng model='virtio'>
	I0630 15:52:56.643330 1620744 main.go:141] libmachine: (bridge-668101)       <backend model='random'>/dev/random</backend>
	I0630 15:52:56.643341 1620744 main.go:141] libmachine: (bridge-668101)     </rng>
	I0630 15:52:56.643348 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.643370 1620744 main.go:141] libmachine: (bridge-668101)     
	I0630 15:52:56.643393 1620744 main.go:141] libmachine: (bridge-668101)   </devices>
	I0630 15:52:56.643405 1620744 main.go:141] libmachine: (bridge-668101) </domain>
	I0630 15:52:56.643415 1620744 main.go:141] libmachine: (bridge-668101) 
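As with the network, the domain XML is defined via the libvirt API; a rough virsh equivalent, with a hypothetical file holding the XML printed above:

    virsh define /tmp/bridge-668101.xml
    virsh start bridge-668101
    virsh domifaddr bridge-668101   # later, shows the DHCP lease the log waits for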
	I0630 15:52:56.648384 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:c9:a1:4d in network default
	I0630 15:52:56.649121 1620744 main.go:141] libmachine: (bridge-668101) starting domain...
	I0630 15:52:56.649143 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:56.649148 1620744 main.go:141] libmachine: (bridge-668101) ensuring networks are active...
	I0630 15:52:56.649950 1620744 main.go:141] libmachine: (bridge-668101) Ensuring network default is active
	I0630 15:52:56.650256 1620744 main.go:141] libmachine: (bridge-668101) Ensuring network mk-bridge-668101 is active
	I0630 15:52:56.650853 1620744 main.go:141] libmachine: (bridge-668101) getting domain XML...
	I0630 15:52:56.651713 1620744 main.go:141] libmachine: (bridge-668101) creating domain...
	I0630 15:52:57.410163 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetIP
	I0630 15:52:57.414146 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:57.414618 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:52:57.414653 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:52:57.414941 1619158 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0630 15:52:57.419663 1619158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:52:57.434011 1619158 kubeadm.go:875] updating cluster {Name:flannel-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:52:57.434146 1619158 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:52:57.434191 1619158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:52:57.470291 1619158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:52:57.470364 1619158 ssh_runner.go:195] Run: which lz4
	I0630 15:52:57.475237 1619158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:52:57.480568 1619158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:52:57.480607 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:52:59.283095 1619158 crio.go:462] duration metric: took 1.807899896s to copy over tarball
	I0630 15:52:59.283202 1619158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
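The preload step in shell form, with the commands taken from the log (the ~401 MiB tarball replaces pulling every image individually):

    stat -c "%s %y" /preloaded.tar.lz4 || true   # minikube first checks for a leftover tarball
    # scp preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json             # should now report all images preloaded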
	I0630 15:52:57.821154 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:52:57.853607 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:52:57.853696 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:52:57.914164 1612198 cri.go:89] found id: ""
	I0630 15:52:57.914210 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.914227 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:52:57.914246 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:52:57.914347 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:52:57.987318 1612198 cri.go:89] found id: ""
	I0630 15:52:57.987351 1612198 logs.go:282] 0 containers: []
	W0630 15:52:57.987366 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:52:57.987377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:52:57.987457 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:52:58.079419 1612198 cri.go:89] found id: ""
	I0630 15:52:58.079447 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.079455 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:52:58.079462 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:52:58.079527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:52:58.159322 1612198 cri.go:89] found id: ""
	I0630 15:52:58.159364 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.159376 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:52:58.159385 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:52:58.159456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:52:58.214549 1612198 cri.go:89] found id: ""
	I0630 15:52:58.214589 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.214605 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:52:58.214614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:52:58.214688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:52:58.268709 1612198 cri.go:89] found id: ""
	I0630 15:52:58.268743 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.268755 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:52:58.268764 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:52:58.268865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:52:58.336282 1612198 cri.go:89] found id: ""
	I0630 15:52:58.336316 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.336327 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:52:58.336335 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:52:58.336411 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:52:58.385539 1612198 cri.go:89] found id: ""
	I0630 15:52:58.385568 1612198 logs.go:282] 0 containers: []
	W0630 15:52:58.385577 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:52:58.385587 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:52:58.385600 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:52:58.490925 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:52:58.490953 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:52:58.490966 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:58.595534 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:52:58.595636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:52:58.670912 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:52:58.670947 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:52:58.746686 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:52:58.746777 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.264137 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:01.286226 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:01.286330 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:01.365280 1612198 cri.go:89] found id: ""
	I0630 15:53:01.365314 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.365328 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:01.365336 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:01.365446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:01.416551 1612198 cri.go:89] found id: ""
	I0630 15:53:01.416609 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.416628 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:01.416639 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:01.416760 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:01.466901 1612198 cri.go:89] found id: ""
	I0630 15:53:01.466951 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.466968 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:01.466992 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:01.467076 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:01.515958 1612198 cri.go:89] found id: ""
	I0630 15:53:01.516004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.516018 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:01.516026 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:01.516100 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:01.556162 1612198 cri.go:89] found id: ""
	I0630 15:53:01.556199 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.556212 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:01.556220 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:01.556294 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:01.596633 1612198 cri.go:89] found id: ""
	I0630 15:53:01.596668 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.596681 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:01.596701 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:01.596767 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:01.643515 1612198 cri.go:89] found id: ""
	I0630 15:53:01.643544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.643553 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:01.643560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:01.643623 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:01.688673 1612198 cri.go:89] found id: ""
	I0630 15:53:01.688716 1612198 logs.go:282] 0 containers: []
	W0630 15:53:01.688730 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:01.688746 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:01.688763 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:01.732854 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:01.732887 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:01.792838 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:01.792898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:01.809743 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:01.809803 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:01.893975 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:01.894006 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:01.894020 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:52:58.300955 1620744 main.go:141] libmachine: (bridge-668101) waiting for IP...
	I0630 15:52:58.302501 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.303671 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.303696 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.303566 1620899 retry.go:31] will retry after 218.695917ms: waiting for domain to come up
	I0630 15:52:58.524255 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.525158 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.525190 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.525070 1620899 retry.go:31] will retry after 355.788445ms: waiting for domain to come up
	I0630 15:52:58.882797 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:58.883330 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:58.883352 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:58.883258 1620899 retry.go:31] will retry after 433.916696ms: waiting for domain to come up
	I0630 15:52:59.319443 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:59.320277 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:59.320312 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:59.320255 1620899 retry.go:31] will retry after 591.607748ms: waiting for domain to come up
	I0630 15:52:59.914140 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:52:59.914771 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:52:59.914833 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:52:59.914762 1620899 retry.go:31] will retry after 653.936151ms: waiting for domain to come up
	I0630 15:53:00.571061 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:00.571855 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:00.571885 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:00.571800 1620899 retry.go:31] will retry after 843.188018ms: waiting for domain to come up
	I0630 15:53:01.416477 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:01.417384 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:01.417447 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:01.417320 1620899 retry.go:31] will retry after 766.048685ms: waiting for domain to come up
	I0630 15:53:02.185256 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:02.185660 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:02.185690 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:02.185641 1620899 retry.go:31] will retry after 1.410798952s: waiting for domain to come up
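The retry.go lines above implement a wait-for-DHCP-lease loop with growing, jittered delays. A minimal bash sketch of the same idea (delays illustrative; the MAC is the one the log reports for mk-bridge-668101):

    for delay in 0.2 0.4 0.6 0.8 1.0 1.5; do
      ip=$(virsh net-dhcp-leases mk-bridge-668101 | awk '/52:54:00:de:25:66/ {print $5}')
      [ -n "$ip" ] && { echo "got lease: $ip"; break; }
      sleep "$delay"
    done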
	I0630 15:53:01.524921 1619158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241677784s)
	I0630 15:53:01.524971 1619158 crio.go:469] duration metric: took 2.241824009s to extract the tarball
	I0630 15:53:01.524981 1619158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 15:53:01.580282 1619158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:01.626979 1619158 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:53:01.627012 1619158 cache_images.go:84] Images are preloaded, skipping loading
	I0630 15:53:01.627022 1619158 kubeadm.go:926] updating node { 192.168.50.164 8443 v1.33.2 crio true true} ...
	I0630 15:53:01.627165 1619158 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
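The unit fragments above end up in the 10-kubeadm.conf drop-in that is scp'd a few lines below (314 bytes). Reassembled as one file, it would read approximately as follows; exact whitespace is an assumption:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.164

    [Install]

The empty ExecStart= clears the packaged unit's command before the minikube one is set, which is why the directive appears twice.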
	I0630 15:53:01.627252 1619158 ssh_runner.go:195] Run: crio config
	I0630 15:53:01.702008 1619158 cni.go:84] Creating CNI manager for "flannel"
	I0630 15:53:01.702063 1619158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:53:01.702098 1619158 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.164 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-668101 NodeName:flannel-668101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:53:01.702303 1619158 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-668101"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 15:53:01.702411 1619158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:53:01.715795 1619158 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:53:01.715889 1619158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:53:01.729847 1619158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0630 15:53:01.752217 1619158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:53:01.775084 1619158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
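The generated kubeadm config printed above is what just landed in /var/tmp/minikube/kubeadm.yaml.new via the 2294-byte scp. A hedged sketch of how such a config is typically consumed (the init invocation itself is outside this excerpt):

    sudo /var/lib/minikube/binaries/v1.33.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new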
	I0630 15:53:01.796311 1619158 ssh_runner.go:195] Run: grep 192.168.50.164	control-plane.minikube.internal$ /etc/hosts
	I0630 15:53:01.801900 1619158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:53:01.819789 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:01.986382 1619158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:02.019955 1619158 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101 for IP: 192.168.50.164
	I0630 15:53:02.019984 1619158 certs.go:194] generating shared ca certs ...
	I0630 15:53:02.020008 1619158 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.020252 1619158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:53:02.020336 1619158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:53:02.020356 1619158 certs.go:256] generating profile certs ...
	I0630 15:53:02.020447 1619158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key
	I0630 15:53:02.020471 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt with IP's: []
	I0630 15:53:02.580979 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt ...
	I0630 15:53:02.581014 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.crt: {Name:mk57dc79d0a2f5ced3dc3dbf5df60db658cd128d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.581193 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key ...
	I0630 15:53:02.581204 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/client.key: {Name:mkc12787b7a2e7f85b5efc0fe2ad3bd4bb3a36c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.581279 1619158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315
	I0630 15:53:02.581294 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.164]
	I0630 15:53:02.891830 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 ...
	I0630 15:53:02.891864 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315: {Name:mk4a3b251c65c4f6336605ebde0fd2b6394224cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.892035 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315 ...
	I0630 15:53:02.892047 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315: {Name:mkfdd1175258bc2f41de0b5ea2ff2aa4d2ba1824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:02.892138 1619158 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt.ba41c315 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt
	I0630 15:53:02.892212 1619158 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key.ba41c315 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key
	I0630 15:53:02.892263 1619158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key
	I0630 15:53:02.892288 1619158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt with IP's: []
	I0630 15:53:03.110294 1619158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt ...
	I0630 15:53:03.110338 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt: {Name:mk5f2a1c5ffd32a7751cdaa24de023db01340134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:03.110558 1619158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key ...
	I0630 15:53:03.110576 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key: {Name:mk75d7060f89bcef318a4de6ba9f3f077d54a76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
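
	[editor's note] The crypto.go steps above generate the apiserver serving cert with four IP SANs: the cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (192.168.50.164). A minimal Go sketch of issuing such a cert with crypto/x509; it is self-signed for brevity (minikube signs with the profile CA), and every name and value here is illustrative, not minikube's actual code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Fresh key for the serving cert.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The four IP SANs from the "Generating cert ... with IP's" line.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.164"),
		},
	}
	// Self-signed here; the real flow signs with the profile CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```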
	I0630 15:53:03.110779 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:53:03.110831 1619158 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:53:03.110847 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:53:03.110885 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:53:03.110918 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:53:03.110952 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:53:03.111006 1619158 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
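
	[editor's note] The certs.go:484 scan collects every extra cert under .minikube/certs, and per the warning at 15:53:03.110831 it skips zero-byte leftovers. A sketch of that filter under the assumption of a flat directory layout; collectCerts is a hypothetical helper, not minikube's API:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// collectCerts returns the non-empty .pem files under dir, skipping
// zero-byte leftovers the way the certs.go warning above does.
func collectCerts(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var certs []string
	for _, e := range entries {
		if e.IsDir() || filepath.Ext(e.Name()) != ".pem" {
			continue
		}
		info, err := e.Info()
		if err != nil {
			return nil, err
		}
		if info.Size() == 0 {
			fmt.Fprintf(os.Stderr, "ignoring %s, impossibly tiny 0 bytes\n", e.Name())
			continue
		}
		certs = append(certs, filepath.Join(dir, e.Name()))
	}
	return certs, nil
}

func main() {
	certs, err := collectCerts(os.ExpandEnv("$HOME/.minikube/certs"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range certs {
		fmt.Println(c)
	}
}
```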
	I0630 15:53:03.111669 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:53:03.143651 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:53:03.173382 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:53:03.207609 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:53:03.239807 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 15:53:03.271613 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 15:53:03.304865 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:53:03.336277 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/flannel-668101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:53:03.367070 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:53:03.399740 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:53:03.431108 1619158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:53:03.469922 1619158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:53:03.496991 1619158 ssh_runner.go:195] Run: openssl version
	I0630 15:53:03.503713 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:53:03.519935 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.525171 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.525235 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:53:03.533074 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:53:03.546306 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:53:03.560844 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.566199 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.566277 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:03.573685 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 15:53:03.589057 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:53:03.614844 1619158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.621765 1619158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.621846 1619158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:53:03.631593 1619158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
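
	[editor's note] The ls / openssl x509 -hash / ln -fs sequence above implements the standard OpenSSL trust-store convention: /etc/ssl/certs/<subject-hash>.0 must be a symlink to the certificate whose subject hash is <subject-hash> (3ec20f2e, b5213941, and 51391683 in this run). A sketch of the same three steps driven from Go, assuming openssl is on PATH and root privileges like the sudo calls above:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkIntoTrustStore computes the OpenSSL subject hash of certPath and
// symlinks /etc/ssl/certs/<hash>.0 to it, mirroring the
// "test -L || ln -fs" sequence in the log.
func linkIntoTrustStore(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs semantics: replace whatever is there with a fresh symlink.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```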
	I0630 15:53:03.649952 1619158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:53:03.656577 1619158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
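
	[editor's note] certs.go:399 treats the failing stat (exit status 1, "No such file or directory") as evidence of a first start rather than an error. A sketch of that probe, assuming a local filesystem in place of the ssh_runner:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// isFirstStart reports whether the kubeadm-managed client cert is
// absent, which the log above interprets as "likely first start".
func isFirstStart() bool {
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	return errors.Is(err, os.ErrNotExist)
}

func main() {
	if isFirstStart() {
		fmt.Println("'apiserver-kubelet-client' cert doesn't exist, likely first start")
	}
}
```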
	I0630 15:53:03.656636 1619158 kubeadm.go:392] StartCluster: {Name:flannel-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:flannel-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:53:03.656726 1619158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:53:03.656792 1619158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:53:03.706253 1619158 cri.go:89] found id: ""
	I0630 15:53:03.706351 1619158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:53:03.718137 1619158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:53:03.730377 1619158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:53:03.745839 1619158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:53:03.745864 1619158 kubeadm.go:157] found existing configuration files:
	
	I0630 15:53:03.745922 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:53:03.757621 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:53:03.757687 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:53:03.771916 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:53:03.784628 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:53:03.784695 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:53:03.798159 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:53:03.809990 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:53:03.810067 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:53:03.822466 1619158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:53:03.834020 1619158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:53:03.834138 1619158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
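
	[editor's note] The kubeadm.go:163 cleanup above is the same pattern repeated for four files: grep the kubeconfig for the expected control-plane endpoint, and remove the file when the endpoint is missing (here every grep exits 2 because the files do not exist yet). A compact local sketch of that loop, with runSSH as a hypothetical stand-in for the real ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a stand-in for minikube's ssh_runner: it runs the command
// (here locally, via sudo) and returns nil only on exit status 0.
func runSSH(args ...string) error {
	return exec.Command("sudo", args...).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + conf
		// grep exits non-zero if the endpoint is absent or the file is
		// missing; either way the stale file is removed so kubeadm can
		// regenerate it.
		if err := runSSH("grep", endpoint, path); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = runSSH("rm", "-f", path)
		}
	}
}
```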
	I0630 15:53:03.845749 1619158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:53:04.003225 1619158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:53:04.474834 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:04.495812 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:04.495894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:04.545620 1612198 cri.go:89] found id: ""
	I0630 15:53:04.545652 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.545664 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:04.545674 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:04.545819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:04.595168 1612198 cri.go:89] found id: ""
	I0630 15:53:04.595303 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.595325 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:04.595339 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:04.595423 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:04.648158 1612198 cri.go:89] found id: ""
	I0630 15:53:04.648189 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.648201 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:04.648210 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:04.648279 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:04.695407 1612198 cri.go:89] found id: ""
	I0630 15:53:04.695441 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.695452 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:04.695460 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:04.695525 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:04.745024 1612198 cri.go:89] found id: ""
	I0630 15:53:04.745059 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.745072 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:04.745079 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:04.745147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:04.784238 1612198 cri.go:89] found id: ""
	I0630 15:53:04.784278 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.784291 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:04.784301 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:04.784375 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:04.828921 1612198 cri.go:89] found id: ""
	I0630 15:53:04.828962 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.828976 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:04.828986 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:04.829058 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:04.878950 1612198 cri.go:89] found id: ""
	I0630 15:53:04.878980 1612198 logs.go:282] 0 containers: []
	W0630 15:53:04.878992 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:04.879004 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:04.879021 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:04.898852 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:04.898883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:04.994919 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:04.994955 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:04.994971 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:05.081838 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:05.081891 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:05.134599 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:05.134639 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
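
	[editor's note] Each log-gathering cycle of process 1612198 above is one probe per control-plane component: crictl ps -a --quiet --name=<component>, with an empty ID list reported as 'No container was found matching'. A sketch of that loop (local exec standing in for the ssh_runner; the output format is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes
		// exited containers, so empty output means the component never ran.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```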
	I0630 15:53:03.598543 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:03.599016 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:03.599041 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:03.599011 1620899 retry.go:31] will retry after 1.276009124s: waiting for domain to come up
	I0630 15:53:04.876532 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:04.877133 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:04.877161 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:04.877082 1620899 retry.go:31] will retry after 1.605247273s: waiting for domain to come up
	I0630 15:53:06.483950 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:06.484698 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:06.484730 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:06.484666 1620899 retry.go:31] will retry after 2.436119373s: waiting for domain to come up
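
	[editor's note] The libmachine waits above (retry.go:31) poll for the domain's IP with delays that grow roughly geometrically with jitter: 1.28s, 1.61s, 2.44s, then 3.30s, 3.51s, 4.33s later on. A minimal sketch of that shape; the ~1.3x growth factor and jitter fraction are assumptions, not minikube's actual constants:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls fn until it succeeds or attempts run out,
// growing the delay ~1.3x per try and adding random jitter, similar in
// shape to the intervals logged above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/4))
		fmt.Printf("will retry after %s: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 13 / 10
	}
	return errors.New("domain never came up")
}

func main() {
	tries := 0
	_ = retryWithBackoff(8, time.Second, func() error {
		tries++
		if tries < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}
```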
	I0630 15:53:07.707840 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:07.724492 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:07.724584 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:07.764489 1612198 cri.go:89] found id: ""
	I0630 15:53:07.764533 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.764545 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:07.764553 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:07.764641 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:07.813734 1612198 cri.go:89] found id: ""
	I0630 15:53:07.813762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.813771 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:07.813777 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:07.813838 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:07.866385 1612198 cri.go:89] found id: ""
	I0630 15:53:07.866412 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.866420 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:07.866426 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:07.866480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:07.913274 1612198 cri.go:89] found id: ""
	I0630 15:53:07.913307 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.913317 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:07.913325 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:07.913394 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:07.966418 1612198 cri.go:89] found id: ""
	I0630 15:53:07.966461 1612198 logs.go:282] 0 containers: []
	W0630 15:53:07.966475 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:07.966484 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:07.966554 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:08.017379 1612198 cri.go:89] found id: ""
	I0630 15:53:08.017443 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.017457 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:08.017465 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:08.017559 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:08.070396 1612198 cri.go:89] found id: ""
	I0630 15:53:08.070427 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.070440 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:08.070449 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:08.070519 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:08.118074 1612198 cri.go:89] found id: ""
	I0630 15:53:08.118118 1612198 logs.go:282] 0 containers: []
	W0630 15:53:08.118132 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:08.118146 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:08.118164 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:08.139695 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:08.139728 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:08.252659 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:08.252683 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:08.252698 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:08.381553 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:08.381602 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:08.448865 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:08.448912 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.032838 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:11.059173 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:11.059251 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:11.115790 1612198 cri.go:89] found id: ""
	I0630 15:53:11.115826 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.115839 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:11.115848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:11.115920 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:11.175246 1612198 cri.go:89] found id: ""
	I0630 15:53:11.175295 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.175307 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:11.175316 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:11.175389 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:11.230317 1612198 cri.go:89] found id: ""
	I0630 15:53:11.230349 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.230360 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:11.230368 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:11.230437 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:11.283786 1612198 cri.go:89] found id: ""
	I0630 15:53:11.283827 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.283839 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:11.283848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:11.283927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:11.334412 1612198 cri.go:89] found id: ""
	I0630 15:53:11.334437 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.334445 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:11.334451 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:11.334508 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:11.399160 1612198 cri.go:89] found id: ""
	I0630 15:53:11.399195 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.399208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:11.399218 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:11.399307 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:11.461034 1612198 cri.go:89] found id: ""
	I0630 15:53:11.461065 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.461078 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:11.461087 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:11.461144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:11.509139 1612198 cri.go:89] found id: ""
	I0630 15:53:11.509169 1612198 logs.go:282] 0 containers: []
	W0630 15:53:11.509180 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:11.509194 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:11.509217 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:11.560268 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:11.560316 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:11.616198 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:11.616253 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:11.636775 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:11.636820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:11.735910 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:11.735936 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:11.735954 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:08.922659 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:08.923323 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:08.923356 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:08.923288 1620899 retry.go:31] will retry after 3.297531276s: waiting for domain to come up
	I0630 15:53:12.222353 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:12.223035 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:12.223068 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:12.222990 1620899 retry.go:31] will retry after 3.51443735s: waiting for domain to come up
	I0630 15:53:17.014584 1619158 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 15:53:17.014637 1619158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:53:17.014706 1619158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:53:17.014838 1619158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:53:17.014964 1619158 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 15:53:17.015057 1619158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:53:17.016771 1619158 out.go:235]   - Generating certificates and keys ...
	I0630 15:53:17.016879 1619158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:53:17.016954 1619158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:53:17.017037 1619158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:53:17.017140 1619158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:53:17.017235 1619158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:53:17.017318 1619158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:53:17.017382 1619158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:53:17.017508 1619158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-668101 localhost] and IPs [192.168.50.164 127.0.0.1 ::1]
	I0630 15:53:17.017557 1619158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:53:17.017714 1619158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-668101 localhost] and IPs [192.168.50.164 127.0.0.1 ::1]
	I0630 15:53:17.017816 1619158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:53:17.017907 1619158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:53:17.017980 1619158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:53:17.018051 1619158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:53:17.018104 1619158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:53:17.018164 1619158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 15:53:17.018252 1619158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:53:17.018322 1619158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:53:17.018382 1619158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:53:17.018488 1619158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:53:17.018583 1619158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:53:17.020268 1619158 out.go:235]   - Booting up control plane ...
	I0630 15:53:17.020370 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:53:17.020449 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:53:17.020523 1619158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:53:17.020623 1619158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:53:17.020700 1619158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:53:17.020739 1619158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:53:17.020859 1619158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 15:53:17.020953 1619158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 15:53:17.021008 1619158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.332284ms
	I0630 15:53:17.021092 1619158 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 15:53:17.021178 1619158 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.50.164:8443/livez
	I0630 15:53:17.021267 1619158 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 15:53:17.021346 1619158 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 15:53:17.021442 1619158 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.131571599s
	I0630 15:53:17.021510 1619158 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.852171886s
	I0630 15:53:17.021568 1619158 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002662518s
	I0630 15:53:17.021665 1619158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 15:53:17.021773 1619158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 15:53:17.021830 1619158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 15:53:17.022015 1619158 kubeadm.go:310] [mark-control-plane] Marking the node flannel-668101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 15:53:17.022075 1619158 kubeadm.go:310] [bootstrap-token] Using token: ux2a4n.m86z51knn5xjib22
	I0630 15:53:17.023469 1619158 out.go:235]   - Configuring RBAC rules ...
	I0630 15:53:17.023592 1619158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 15:53:17.023701 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 15:53:17.023848 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 15:53:17.023981 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 15:53:17.024113 1619158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 15:53:17.024200 1619158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 15:53:17.024304 1619158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 15:53:17.024347 1619158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 15:53:17.024396 1619158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 15:53:17.024424 1619158 kubeadm.go:310] 
	I0630 15:53:17.024503 1619158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 15:53:17.024510 1619158 kubeadm.go:310] 
	I0630 15:53:17.024574 1619158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 15:53:17.024580 1619158 kubeadm.go:310] 
	I0630 15:53:17.024600 1619158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 15:53:17.024654 1619158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 15:53:17.024696 1619158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 15:53:17.024705 1619158 kubeadm.go:310] 
	I0630 15:53:17.024750 1619158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 15:53:17.024756 1619158 kubeadm.go:310] 
	I0630 15:53:17.024799 1619158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 15:53:17.024805 1619158 kubeadm.go:310] 
	I0630 15:53:17.024848 1619158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 15:53:17.024952 1619158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 15:53:17.025026 1619158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 15:53:17.025033 1619158 kubeadm.go:310] 
	I0630 15:53:17.025114 1619158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 15:53:17.025179 1619158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 15:53:17.025185 1619158 kubeadm.go:310] 
	I0630 15:53:17.025258 1619158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ux2a4n.m86z51knn5xjib22 \
	I0630 15:53:17.025350 1619158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 15:53:17.025370 1619158 kubeadm.go:310] 	--control-plane 
	I0630 15:53:17.025374 1619158 kubeadm.go:310] 
	I0630 15:53:17.025507 1619158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 15:53:17.025515 1619158 kubeadm.go:310] 
	I0630 15:53:17.025583 1619158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ux2a4n.m86z51knn5xjib22 \
	I0630 15:53:17.025707 1619158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
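
	[editor's note] The --discovery-token-ca-cert-hash in the join commands above is kubeadm's public-key pin: the hex SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo, prefixed with "sha256:". A sketch that recomputes it from the node's ca.crt:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery-token-ca-cert-hash: the
// SHA-256 of the CA cert's raw SubjectPublicKeyInfo, hex-encoded.
func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h) // e.g. sha256:991ce90c... as in the join command above
}
```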
	I0630 15:53:17.025719 1619158 cni.go:84] Creating CNI manager for "flannel"
	I0630 15:53:17.027099 1619158 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0630 15:53:14.327948 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:14.347007 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:14.347078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:14.391736 1612198 cri.go:89] found id: ""
	I0630 15:53:14.391770 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.391782 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:14.391790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:14.391855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:14.438236 1612198 cri.go:89] found id: ""
	I0630 15:53:14.438274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.438286 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:14.438294 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:14.438381 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:14.479508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.479539 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.479550 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:14.479558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:14.479618 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:14.530347 1612198 cri.go:89] found id: ""
	I0630 15:53:14.530386 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.530400 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:14.530409 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:14.530480 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:14.576356 1612198 cri.go:89] found id: ""
	I0630 15:53:14.576392 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.576404 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:14.576413 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:14.576491 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:14.627508 1612198 cri.go:89] found id: ""
	I0630 15:53:14.627546 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.627557 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:14.627565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:14.627636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:14.674780 1612198 cri.go:89] found id: ""
	I0630 15:53:14.674808 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.674824 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:14.674832 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:14.674899 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:14.717562 1612198 cri.go:89] found id: ""
	I0630 15:53:14.717599 1612198 logs.go:282] 0 containers: []
	W0630 15:53:14.717611 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:14.717624 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:14.717655 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:14.801031 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:14.801063 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:14.801083 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:14.890511 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:14.890559 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:14.953255 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:14.953300 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:15.023105 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:15.023160 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:17.543438 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:17.564446 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:17.564545 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:17.602287 1612198 cri.go:89] found id: ""
	I0630 15:53:17.602336 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.602349 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:17.602358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:17.602449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:17.643215 1612198 cri.go:89] found id: ""
	I0630 15:53:17.643246 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.643259 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:17.643266 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:17.643328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:15.813970 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:15.814578 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find current IP address of domain bridge-668101 in network mk-bridge-668101
	I0630 15:53:15.814693 1620744 main.go:141] libmachine: (bridge-668101) DBG | I0630 15:53:15.814493 1620899 retry.go:31] will retry after 4.330770463s: waiting for domain to come up
	I0630 15:53:17.028285 1619158 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0630 15:53:17.034603 1619158 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.33.2/kubectl ...
	I0630 15:53:17.034627 1619158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0630 15:53:17.064463 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0630 15:53:17.543422 1619158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:53:17.543486 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:17.543598 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-668101 minikube.k8s.io/updated_at=2025_06_30T15_53_17_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=flannel-668101 minikube.k8s.io/primary=true
	I0630 15:53:17.594413 1619158 ops.go:34] apiserver oom_adj: -16
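
	[editor's note] The ops.go:34 line above reads the apiserver's OOM score adjustment: -16 keeps the kernel's OOM killer away from the control plane. A sketch of the same pgrep-then-read probe done locally; the pgrep flags here are simplified relative to the log's -xnf form:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// -x: exact process name match, -n: newest matching pid.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	// /proc/<pid>/oom_adj is the legacy interface the log's command reads.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
```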
	I0630 15:53:17.727637 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:18.228526 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:18.727798 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:19.227728 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:19.728564 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:20.227759 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:20.728760 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.228341 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.728419 1619158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:21.856237 1619158 kubeadm.go:1105] duration metric: took 4.312811681s to wait for elevateKubeSystemPrivileges
	I0630 15:53:21.856299 1619158 kubeadm.go:394] duration metric: took 18.199648133s to StartCluster
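
	[editor's note] The half-second cadence of the `kubectl get sa default` calls above is a poll for the default ServiceAccount to be created by the controller-manager; the loop exits once the command succeeds (4.3s in, per the duration metric). A sketch of that wait, assuming kubectl on PATH and the node's kubeconfig; the function name is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` every interval until
// it succeeds or the deadline passes, like the loop logged above.
func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	start := time.Now()
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
}
```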
	I0630 15:53:21.856325 1619158 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:21.856421 1619158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:53:21.857563 1619158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:21.857818 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 15:53:21.857835 1619158 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.164 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:53:21.857909 1619158 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:53:21.858018 1619158 addons.go:69] Setting storage-provisioner=true in profile "flannel-668101"
	I0630 15:53:21.858038 1619158 addons.go:238] Setting addon storage-provisioner=true in "flannel-668101"
	I0630 15:53:21.858043 1619158 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:21.858037 1619158 addons.go:69] Setting default-storageclass=true in profile "flannel-668101"
	I0630 15:53:21.858077 1619158 host.go:66] Checking if "flannel-668101" exists ...
	I0630 15:53:21.858106 1619158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-668101"
	I0630 15:53:21.858566 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.858573 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.858594 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.858610 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.859497 1619158 out.go:177] * Verifying Kubernetes components...
	I0630 15:53:21.861465 1619158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:21.878756 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0630 15:53:21.879278 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.879431 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0630 15:53:21.879778 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.879797 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.879838 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.880325 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.880347 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.880358 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.880762 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.881385 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.881459 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.881515 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.885509 1619158 addons.go:238] Setting addon default-storageclass=true in "flannel-668101"
	I0630 15:53:21.885555 1619158 host.go:66] Checking if "flannel-668101" exists ...
	I0630 15:53:21.885936 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.885985 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.903264 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0630 15:53:21.903821 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.904198 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0630 15:53:21.904415 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.904440 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.904784 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.904851 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.905447 1619158 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:21.905503 1619158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:21.906077 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.906103 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.906550 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.906795 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.913135 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:53:21.915545 1619158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:53:17.684398 1612198 cri.go:89] found id: ""
	I0630 15:53:17.684474 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.684484 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:17.684493 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:17.684567 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:17.734640 1612198 cri.go:89] found id: ""
	I0630 15:53:17.734681 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.734694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:17.734702 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:17.734787 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:17.771368 1612198 cri.go:89] found id: ""
	I0630 15:53:17.771404 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.771416 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:17.771425 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:17.771497 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:17.828694 1612198 cri.go:89] found id: ""
	I0630 15:53:17.828724 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.828732 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:17.828741 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:17.828815 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:17.870487 1612198 cri.go:89] found id: ""
	I0630 15:53:17.870535 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.870549 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:17.870558 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:17.870639 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:17.907397 1612198 cri.go:89] found id: ""
	I0630 15:53:17.907430 1612198 logs.go:282] 0 containers: []
	W0630 15:53:17.907440 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:17.907451 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:17.907464 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:17.983887 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:17.983934 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:18.027406 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:18.027439 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:18.079092 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:18.079140 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:18.094309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:18.094345 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:18.168726 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
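Every "describe nodes" pass above fails the same way: kubectl is refused on localhost:8443, which simply means no kube-apiserver is listening on this profile yet, so each log-gathering cycle keeps producing this block until one comes up. A minimal standalone probe for the same condition (a sketch, not minikube's code; the address is taken from the error text above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the port kubectl was refused on; an error here reproduces
        // the "connection refused" seen in the log above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }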
	I0630 15:53:20.669207 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:20.688479 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:20.688575 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:20.729290 1612198 cri.go:89] found id: ""
	I0630 15:53:20.729317 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.729327 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:20.729334 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:20.729399 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:20.772585 1612198 cri.go:89] found id: ""
	I0630 15:53:20.772606 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.772638 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:20.772647 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:20.772704 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:20.815369 1612198 cri.go:89] found id: ""
	I0630 15:53:20.815407 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.815419 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:20.815428 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:20.815490 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:20.856251 1612198 cri.go:89] found id: ""
	I0630 15:53:20.856282 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.856294 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:20.856304 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:20.856371 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:20.895690 1612198 cri.go:89] found id: ""
	I0630 15:53:20.895723 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.895732 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:20.895743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:20.895823 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:20.938040 1612198 cri.go:89] found id: ""
	I0630 15:53:20.938075 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.938085 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:20.938094 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:20.938163 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:20.983241 1612198 cri.go:89] found id: ""
	I0630 15:53:20.983280 1612198 logs.go:282] 0 containers: []
	W0630 15:53:20.983293 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:20.983302 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:20.983373 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:21.029599 1612198 cri.go:89] found id: ""
	I0630 15:53:21.029633 1612198 logs.go:282] 0 containers: []
	W0630 15:53:21.029645 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:21.029659 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:21.029675 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:21.115729 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:21.115753 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:21.115766 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:21.192780 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:21.192824 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:21.238081 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:21.238141 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:21.298363 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:21.298437 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:20.150210 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.151081 1620744 main.go:141] libmachine: (bridge-668101) found domain IP: 192.168.72.11
	I0630 15:53:20.151108 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has current primary IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.151118 1620744 main.go:141] libmachine: (bridge-668101) reserving static IP address...
	I0630 15:53:20.151802 1620744 main.go:141] libmachine: (bridge-668101) DBG | unable to find host DHCP lease matching {name: "bridge-668101", mac: "52:54:00:de:25:66", ip: "192.168.72.11"} in network mk-bridge-668101
	I0630 15:53:20.255604 1620744 main.go:141] libmachine: (bridge-668101) reserved static IP address 192.168.72.11 for domain bridge-668101
	I0630 15:53:20.255640 1620744 main.go:141] libmachine: (bridge-668101) waiting for SSH...
	I0630 15:53:20.255651 1620744 main.go:141] libmachine: (bridge-668101) DBG | Getting to WaitForSSH function...
	I0630 15:53:20.259016 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.259553 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.259578 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.259789 1620744 main.go:141] libmachine: (bridge-668101) DBG | Using SSH client type: external
	I0630 15:53:20.259817 1620744 main.go:141] libmachine: (bridge-668101) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa (-rw-------)
	I0630 15:53:20.259855 1620744 main.go:141] libmachine: (bridge-668101) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 15:53:20.259878 1620744 main.go:141] libmachine: (bridge-668101) DBG | About to run SSH command:
	I0630 15:53:20.259893 1620744 main.go:141] libmachine: (bridge-668101) DBG | exit 0
	I0630 15:53:20.389637 1620744 main.go:141] libmachine: (bridge-668101) DBG | SSH cmd err, output: <nil>: 
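WaitForSSH above keeps invoking the external ssh client with the exact option set dumped at 15:53:20.259855 until a remote `exit 0` succeeds. A minimal retry-loop sketch of that idea (hypothetical, not libmachine's implementation; key path and user are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Same idea as WaitForSSH: keep running `ssh ... exit 0`
        // until the command exits cleanly or we give up.
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa",
            "docker@192.168.72.11", "exit", "0",
        }
        for attempt := 1; attempt <= 30; attempt++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is up after", attempt, "attempt(s)")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }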
	I0630 15:53:20.390056 1620744 main.go:141] libmachine: (bridge-668101) KVM machine creation complete
	I0630 15:53:20.390289 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:53:20.390852 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:20.391109 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:20.391342 1620744 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 15:53:20.391357 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:20.392814 1620744 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 15:53:20.392829 1620744 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 15:53:20.392834 1620744 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 15:53:20.392840 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.396358 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.396743 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.396783 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.397085 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.397290 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.397458 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.397650 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.397853 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.398148 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.398164 1620744 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 15:53:20.508895 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:53:20.508932 1620744 main.go:141] libmachine: Detecting the provisioner...
	I0630 15:53:20.508944 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.512198 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.512629 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.512658 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.512888 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.513085 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.513290 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.513461 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.513609 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.513804 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.513814 1620744 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 15:53:20.626452 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 15:53:20.626583 1620744 main.go:141] libmachine: found compatible host: buildroot
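"found compatible host: buildroot" falls out of matching the ID/NAME fields of the /etc/os-release payload that `cat /etc/os-release` returned just above. A minimal parser sketch for those fields (hypothetical; the real detection lives in libmachine's provisioner matching):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // The same payload the SSH command returned above.
        osRelease := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                fields[k] = strings.Trim(v, `"`)
            }
        }
        if fields["ID"] == "buildroot" {
            fmt.Println("found compatible host:", fields["ID"], fields["VERSION_ID"])
        }
    }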
	I0630 15:53:20.626595 1620744 main.go:141] libmachine: Provisioning with buildroot...
	I0630 15:53:20.626603 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.626863 1620744 buildroot.go:166] provisioning hostname "bridge-668101"
	I0630 15:53:20.626886 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.627111 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.630431 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.631000 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.631029 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.631318 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.631539 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.631746 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.631891 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.632041 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.632253 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.632267 1620744 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-668101 && echo "bridge-668101" | sudo tee /etc/hostname
	I0630 15:53:20.768072 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-668101
	
	I0630 15:53:20.768109 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.772078 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.772554 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.772641 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.772981 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:20.773268 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.773482 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:20.773700 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:20.773939 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:20.774161 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:20.774183 1620744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-668101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-668101/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-668101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 15:53:20.912221 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 15:53:20.912262 1620744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1550299/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1550299/.minikube}
	I0630 15:53:20.912306 1620744 buildroot.go:174] setting up certificates
	I0630 15:53:20.912324 1620744 provision.go:84] configureAuth start
	I0630 15:53:20.912343 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetMachineName
	I0630 15:53:20.912731 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:20.916012 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.916475 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.916519 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.916686 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:20.919828 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.920293 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:20.920328 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:20.920495 1620744 provision.go:143] copyHostCerts
	I0630 15:53:20.920585 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem, removing ...
	I0630 15:53:20.920609 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem
	I0630 15:53:20.920712 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.pem (1078 bytes)
	I0630 15:53:20.920869 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem, removing ...
	I0630 15:53:20.920882 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem
	I0630 15:53:20.920919 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/cert.pem (1123 bytes)
	I0630 15:53:20.921008 1620744 exec_runner.go:144] found /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem, removing ...
	I0630 15:53:20.921018 1620744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem
	I0630 15:53:20.921044 1620744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1550299/.minikube/key.pem (1675 bytes)
	I0630 15:53:20.921126 1620744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem org=jenkins.bridge-668101 san=[127.0.0.1 192.168.72.11 bridge-668101 localhost minikube]
	I0630 15:53:21.264068 1620744 provision.go:177] copyRemoteCerts
	I0630 15:53:21.264165 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 15:53:21.264213 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.268086 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.268409 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.268452 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.268601 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.268924 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.269110 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.269238 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:21.361451 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 15:53:21.391187 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 15:53:21.419255 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 15:53:21.448237 1620744 provision.go:87] duration metric: took 535.893652ms to configureAuth
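The configureAuth step timed above generated a server certificate whose subject-alternative names cover every entry from the san=[...] list logged at 15:53:20.921126, then copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal sketch of minting a certificate with that SAN set using Go's crypto/x509 (self-signed here for brevity; minikube actually signs server.pem against its own CA):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.bridge-668101"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the provision.go log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.11")},
            DNSNames:    []string{"bridge-668101", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        fmt.Printf("server cert with SANs generated (%d PEM bytes)\n", len(pemBytes))
    }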
	I0630 15:53:21.448274 1620744 buildroot.go:189] setting minikube options for container-runtime
	I0630 15:53:21.448476 1620744 config.go:182] Loaded profile config "bridge-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:21.448584 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.453284 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.453882 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.453912 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.454135 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.454353 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.454521 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.454680 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.454822 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:21.455051 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:21.455078 1620744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0630 15:53:21.715413 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
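The printf|tee above leaves /etc/sysconfig/crio.minikube holding CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' before crio is restarted. Presumably the guest image's crio.service consumes that file through an environment-file directive along the lines of the fragment below; this is an assumption about the minikube ISO's unit file, not output captured in this run:

    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube

with the option string then expanded onto crio's command line at start-up.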
	I0630 15:53:21.715442 1620744 main.go:141] libmachine: Checking connection to Docker...
	I0630 15:53:21.715451 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetURL
	I0630 15:53:21.716819 1620744 main.go:141] libmachine: (bridge-668101) DBG | using libvirt version 6000000
	I0630 15:53:21.719440 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.719824 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.719856 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.719970 1620744 main.go:141] libmachine: Docker is up and running!
	I0630 15:53:21.719983 1620744 main.go:141] libmachine: Reticulating splines...
	I0630 15:53:21.719993 1620744 client.go:171] duration metric: took 25.917938791s to LocalClient.Create
	I0630 15:53:21.720027 1620744 start.go:167] duration metric: took 25.918028738s to libmachine.API.Create "bridge-668101"
	I0630 15:53:21.720040 1620744 start.go:293] postStartSetup for "bridge-668101" (driver="kvm2")
	I0630 15:53:21.720054 1620744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 15:53:21.720081 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.720445 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 15:53:21.720475 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.723380 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.723862 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.723895 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.724514 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.724885 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.725127 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.725432 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:21.813595 1620744 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 15:53:21.818546 1620744 info.go:137] Remote host: Buildroot 2025.02
	I0630 15:53:21.818584 1620744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/addons for local assets ...
	I0630 15:53:21.818645 1620744 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1550299/.minikube/files for local assets ...
	I0630 15:53:21.818728 1620744 filesync.go:149] local asset: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem -> 15577322.pem in /etc/ssl/certs
	I0630 15:53:21.818833 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0630 15:53:21.830037 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /etc/ssl/certs/15577322.pem (1708 bytes)
	I0630 15:53:21.862135 1620744 start.go:296] duration metric: took 142.08086ms for postStartSetup
	I0630 15:53:21.862197 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetConfigRaw
	I0630 15:53:21.862968 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:21.866304 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.866720 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.866752 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.867254 1620744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/config.json ...
	I0630 15:53:21.867599 1620744 start.go:128] duration metric: took 26.08874701s to createHost
	I0630 15:53:21.867640 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.870855 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.871356 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.871397 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.871563 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:21.871789 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.871989 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.872148 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:21.872344 1620744 main.go:141] libmachine: Using SSH client type: native
	I0630 15:53:21.872607 1620744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I0630 15:53:21.872619 1620744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 15:53:21.990814 1620744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751298801.970811827
	
	I0630 15:53:21.990846 1620744 fix.go:216] guest clock: 1751298801.970811827
	I0630 15:53:21.990856 1620744 fix.go:229] Guest: 2025-06-30 15:53:21.970811827 +0000 UTC Remote: 2025-06-30 15:53:21.867622048 +0000 UTC m=+38.958890662 (delta=103.189779ms)
	I0630 15:53:21.990888 1620744 fix.go:200] guest clock delta is within tolerance: 103.189779ms
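fix.go parses the guest's `date +%s.%N` reply (1751298801.970811827) and compares it against the host timestamp recorded for the same moment, accepting the 103.189779ms delta as within tolerance. A minimal sketch of that comparison (hypothetical helper; the actual tolerance bound is not shown in this log):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest output from the log: seconds.nanoseconds since the epoch.
        out := "1751298801.970811827"
        secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(out), ".")
        sec, _ := strconv.ParseInt(secStr, 10, 64)
        nsec, _ := strconv.ParseInt(nsecStr, 10, 64)
        guest := time.Unix(sec, nsec)
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Println("guest clock delta:", delta)
    }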
	I0630 15:53:21.990895 1620744 start.go:83] releasing machines lock for "bridge-668101", held for 26.212259549s
	I0630 15:53:21.990921 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.991256 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:21.994862 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.995334 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:21.995365 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:21.995601 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996174 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996422 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:21.996540 1620744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 15:53:21.996586 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:21.996665 1620744 ssh_runner.go:195] Run: cat /version.json
	I0630 15:53:21.996697 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:22.000078 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000431 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:22.000471 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000574 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.000868 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:22.001096 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:22.001101 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:22.001197 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:22.001278 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:22.001303 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:22.001484 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:22.001499 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:22.001633 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:22.001809 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:22.115933 1620744 ssh_runner.go:195] Run: systemctl --version
	I0630 15:53:22.124264 1620744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0630 15:53:22.297158 1620744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 15:53:22.303464 1620744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 15:53:22.303535 1620744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 15:53:22.322898 1620744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
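The find/-exec mv one-liner above sidelines conflicting CNI files by giving them a .mk_disabled suffix so CRI-O will not load them; here it caught /etc/cni/net.d/87-podman-bridge.conflist. A minimal Go sketch of the same rename pass (hypothetical; the shipped step is the shell command in the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("skip:", err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            // Match the same patterns as the find expression (*bridge*, *podman*),
            // skipping files that were already disabled.
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                fmt.Println("rename failed:", err)
                continue
            }
            fmt.Println("disabled", src)
        }
    }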
	I0630 15:53:22.322933 1620744 start.go:495] detecting cgroup driver to use...
	I0630 15:53:22.323033 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0630 15:53:22.346693 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0630 15:53:22.370685 1620744 docker.go:230] disabling cri-docker service (if available) ...
	I0630 15:53:22.370799 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 15:53:22.388014 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 15:53:22.405538 1620744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 15:53:22.556327 1620744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 15:53:22.736266 1620744 docker.go:246] disabling docker service ...
	I0630 15:53:22.736364 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 15:53:22.755856 1620744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 15:53:22.773629 1620744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 15:53:21.916791 1619158 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:21.916818 1619158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:53:21.916850 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:53:21.920269 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.920634 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:53:21.920657 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.920814 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:53:21.921063 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.921260 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:53:21.921462 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:53:21.930939 1619158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0630 15:53:21.931592 1619158 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:21.932329 1619158 main.go:141] libmachine: Using API Version  1
	I0630 15:53:21.932352 1619158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:21.932845 1619158 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:21.933076 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetState
	I0630 15:53:21.935023 1619158 main.go:141] libmachine: (flannel-668101) Calling .DriverName
	I0630 15:53:21.935343 1619158 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:21.935362 1619158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:53:21.935385 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHHostname
	I0630 15:53:21.938667 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.939066 1619158 main.go:141] libmachine: (flannel-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:56:26", ip: ""} in network mk-flannel-668101: {Iface:virbr2 ExpiryTime:2025-06-30 16:52:42 +0000 UTC Type:0 Mac:52:54:00:d0:56:26 Iaid: IPaddr:192.168.50.164 Prefix:24 Hostname:flannel-668101 Clientid:01:52:54:00:d0:56:26}
	I0630 15:53:21.939089 1619158 main.go:141] libmachine: (flannel-668101) DBG | domain flannel-668101 has defined IP address 192.168.50.164 and MAC address 52:54:00:d0:56:26 in network mk-flannel-668101
	I0630 15:53:21.939228 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHPort
	I0630 15:53:21.939438 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHKeyPath
	I0630 15:53:21.939561 1619158 main.go:141] libmachine: (flannel-668101) Calling .GetSSHUsername
	I0630 15:53:21.939667 1619158 sshutil.go:53] new ssh client: &{IP:192.168.50.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/flannel-668101/id_rsa Username:docker}
	I0630 15:53:22.100716 1619158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 15:53:22.185715 1619158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:22.445585 1619158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:22.457596 1619158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:22.671125 1619158 start.go:972] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
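The long sed pipeline at 15:53:22.100716 rewrote the coredns ConfigMap in place. Reading the two sed expressions, the injected Corefile lines should come out as below; this is a reconstruction from the command, not a capture from this run:

    log                                        (inserted before the errors plugin)
    hosts {
       192.168.50.1 host.minikube.internal
       fallthrough
    }                                          (inserted before forward . /etc/resolv.conf)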
	I0630 15:53:22.672317 1619158 node_ready.go:35] waiting up to 15m0s for node "flannel-668101" to be "Ready" ...
	I0630 15:53:22.953479 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.953512 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.953863 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.953868 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:22.953885 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:22.953895 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.953902 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.954132 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.954147 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:22.966064 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:22.966091 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:22.966575 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:22.966595 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:22.966608 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.178366 1619158 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-668101" context rescaled to 1 replicas
	I0630 15:53:23.182951 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:23.182983 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:23.183310 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:23.183341 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.183352 1619158 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:23.183359 1619158 main.go:141] libmachine: (flannel-668101) Calling .Close
	I0630 15:53:23.183771 1619158 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:23.183785 1619158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:23.183846 1619158 main.go:141] libmachine: (flannel-668101) DBG | Closing plugin on server side
	I0630 15:53:23.185609 1619158 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0630 15:53:22.968973 1620744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 15:53:23.133301 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 15:53:23.155249 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 15:53:23.183726 1620744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0630 15:53:23.183827 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.198004 1620744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0630 15:53:23.198112 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.210920 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.223143 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.235289 1620744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 15:53:23.248292 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.260423 1620744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0630 15:53:23.280821 1620744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
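Taken together, the sed edits since 15:53:23.183 should leave these keys in /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the commands above, not a capture from this run):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]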
	I0630 15:53:23.293185 1620744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 15:53:23.305009 1620744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 15:53:23.305155 1620744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 15:53:23.321828 1620744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
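The sysctl probe failed only because br_netfilter was not loaded yet (the error at 15:53:23.305009 says the /proc path does not exist), so minikube falls back to modprobe and then flips IPv4 forwarding directly through procfs. A minimal sketch of the same procfs check and write (hypothetical; both require root on a real guest):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The probe that failed in the log: this file only exists once
        // the br_netfilter module is loaded.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            fmt.Println("br_netfilter not loaded yet:", err)
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Println("could not enable ip_forward:", err)
            return
        }
        fmt.Println("ip_forward enabled")
    }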
	I0630 15:53:23.333118 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:23.476277 1620744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0630 15:53:23.585009 1620744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0630 15:53:23.585109 1620744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0630 15:53:23.590082 1620744 start.go:563] Will wait 60s for crictl version
	I0630 15:53:23.590166 1620744 ssh_runner.go:195] Run: which crictl
	I0630 15:53:23.593975 1620744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 15:53:23.637313 1620744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0630 15:53:23.637475 1620744 ssh_runner.go:195] Run: crio --version
	I0630 15:53:23.668285 1620744 ssh_runner.go:195] Run: crio --version
	I0630 15:53:23.699975 1620744 out.go:177] * Preparing Kubernetes v1.33.2 on CRI-O 1.29.1 ...
	I0630 15:53:23.186948 1619158 addons.go:514] duration metric: took 1.329044999s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0630 15:53:24.675577 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:23.816993 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:23.835380 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:23.835460 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:23.877562 1612198 cri.go:89] found id: ""
	I0630 15:53:23.877598 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.877610 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:23.877618 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:23.877695 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:23.919089 1612198 cri.go:89] found id: ""
	I0630 15:53:23.919130 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.919144 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:23.919152 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:23.919232 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:23.964835 1612198 cri.go:89] found id: ""
	I0630 15:53:23.964864 1612198 logs.go:282] 0 containers: []
	W0630 15:53:23.964875 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:23.964883 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:23.964956 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:24.011639 1612198 cri.go:89] found id: ""
	I0630 15:53:24.011680 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.011694 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:24.011704 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:24.011791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:24.059206 1612198 cri.go:89] found id: ""
	I0630 15:53:24.059240 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.059250 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:24.059262 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:24.059335 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:24.116479 1612198 cri.go:89] found id: ""
	I0630 15:53:24.116517 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.116530 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:24.116540 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:24.116619 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:24.164108 1612198 cri.go:89] found id: ""
	I0630 15:53:24.164142 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.164153 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:24.164162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:24.164235 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:24.232264 1612198 cri.go:89] found id: ""
	I0630 15:53:24.232299 1612198 logs.go:282] 0 containers: []
	W0630 15:53:24.232312 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:24.232325 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:24.232343 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:24.334546 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:24.334577 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:24.334597 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:24.450906 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:24.450963 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:24.523317 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:24.523361 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:24.609506 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:24.609547 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
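Each "Gathering logs" step above shells out to a standard tool over SSH; assuming access to the node, the same diagnostics can be collected manually (commands copied from the Run: lines):

	sudo journalctl -u kubelet -n 400      # kubelet logs
	sudo journalctl -u crio -n 400         # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a                      # container status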
	I0630 15:53:27.134042 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:27.156543 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:27.156635 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:27.206777 1612198 cri.go:89] found id: ""
	I0630 15:53:27.206819 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.206831 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:27.206841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:27.206924 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:27.257098 1612198 cri.go:89] found id: ""
	I0630 15:53:27.257141 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.257153 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:27.257162 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:27.257226 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:27.311101 1612198 cri.go:89] found id: ""
	I0630 15:53:27.311129 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.311137 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:27.311164 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:27.311233 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:27.356225 1612198 cri.go:89] found id: ""
	I0630 15:53:27.356264 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.356276 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:27.356285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:27.356446 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:27.408114 1612198 cri.go:89] found id: ""
	I0630 15:53:27.408173 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.408185 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:27.408194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:27.408264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:27.453433 1612198 cri.go:89] found id: ""
	I0630 15:53:27.453471 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.453483 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:27.453491 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:27.453560 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:27.502170 1612198 cri.go:89] found id: ""
	I0630 15:53:27.502209 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.502222 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:27.502230 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:27.502304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:27.539066 1612198 cri.go:89] found id: ""
	I0630 15:53:27.539104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:27.539113 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:27.539124 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:27.539157 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:27.557767 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:27.557807 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:27.661895 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:27.661924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:27.661943 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:23.701364 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetIP
	I0630 15:53:23.704233 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:23.704638 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:23.704669 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:23.704895 1620744 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0630 15:53:23.709158 1620744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
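The /etc/hosts rewrite above is the harness's idempotent-update idiom: strip any stale entry, append the current one, and install the result via a temp file. The same pattern spelled out (IP and hostname taken from the line above; the echo contains a literal tab):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts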
	I0630 15:53:23.723315 1620744 kubeadm.go:875] updating cluster {Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 15:53:23.723444 1620744 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 15:53:23.723509 1620744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:23.763562 1620744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 15:53:23.763659 1620744 ssh_runner.go:195] Run: which lz4
	I0630 15:53:23.769114 1620744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 15:53:23.774965 1620744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 15:53:23.775007 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (421067896 bytes)
	I0630 15:53:25.374857 1620744 crio.go:462] duration metric: took 1.60580082s to copy over tarball
	I0630 15:53:25.374981 1620744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 15:53:27.865991 1620744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.490972706s)
	I0630 15:53:27.866033 1620744 crio.go:469] duration metric: took 2.491137727s to extract the tarball
	I0630 15:53:27.866044 1620744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 15:53:27.908959 1620744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 15:53:27.960351 1620744 crio.go:514] all images are preloaded for cri-o runtime.
	I0630 15:53:27.960383 1620744 cache_images.go:84] Images are preloaded, skipping loading
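The preload path above falls back from "images already present" to copying a cached lz4 tarball to the node and unpacking it under /var, where CRI-O's image store lives. Assuming the tarball is already at /preloaded.tar.lz4, the extraction and verification are (commands from the Run: lines above):

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json    # images should now be visible to CRI-O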
	I0630 15:53:27.960392 1620744 kubeadm.go:926] updating node { 192.168.72.11 8443 v1.33.2 crio true true} ...
	I0630 15:53:27.960497 1620744 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-668101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
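The kubelet flags above land in a systemd unit plus a drop-in (the scp lines further down write /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Once those files are in place, the effective unit can be inspected with:

	systemctl cat kubelet    # shows the unit merged with the 10-kubeadm.conf drop-in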
	I0630 15:53:27.960566 1620744 ssh_runner.go:195] Run: crio config
	I0630 15:53:28.007607 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:53:28.007639 1620744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 15:53:28.007668 1620744 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-668101 NodeName:bridge-668101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 15:53:28.007874 1620744 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-668101"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
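A config of this shape can be sanity-checked before the real kubeadm init below touches any node state; a sketch using kubeadm's dry-run mode, with the binary and config paths taken from this log:

	sudo /var/lib/minikube/binaries/v1.33.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run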
	I0630 15:53:28.007956 1620744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 15:53:28.019439 1620744 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 15:53:28.019533 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 15:53:28.030681 1620744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0630 15:53:28.054217 1620744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 15:53:28.078657 1620744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0630 15:53:28.103175 1620744 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I0630 15:53:28.107637 1620744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 15:53:28.121750 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:28.271570 1620744 ssh_runner.go:195] Run: sudo systemctl start kubelet
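The daemon-reload above makes systemd pick up the freshly written unit files before kubelet is started; confirming the unit actually came up is one more command (a sketch):

	sudo systemctl daemon-reload
	sudo systemctl start kubelet
	systemctl is-active kubelet    # expect: active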
	I0630 15:53:28.301805 1620744 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101 for IP: 192.168.72.11
	I0630 15:53:28.301846 1620744 certs.go:194] generating shared ca certs ...
	I0630 15:53:28.301873 1620744 certs.go:226] acquiring lock for ca certs: {Name:mk773029d2b53ceb6ec3c9684abd5c02b7891701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.302109 1620744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key
	I0630 15:53:28.302183 1620744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key
	I0630 15:53:28.302206 1620744 certs.go:256] generating profile certs ...
	I0630 15:53:28.302293 1620744 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key
	I0630 15:53:28.302316 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt with IP's: []
	I0630 15:53:28.454855 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt ...
	I0630 15:53:28.454891 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.crt: {Name:mk937708224110c3dd03876ac97fd50296fa97e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.455077 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key ...
	I0630 15:53:28.455095 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/client.key: {Name:mkabac9afc77f4fa227e818a7db37dc6cde93101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.455181 1620744 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f
	I0630 15:53:28.455199 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.11]
	I0630 15:53:28.535439 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f ...
	I0630 15:53:28.535477 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f: {Name:mkb3d4c341f11f3a902e7d6409776e997bb9f0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.535666 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f ...
	I0630 15:53:28.535680 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f: {Name:mkb836a4b78458ae1ce3c620e0b6b74aca7afa96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.535756 1620744 certs.go:381] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt.9a49803f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt
	I0630 15:53:28.535850 1620744 certs.go:385] copying /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key.9a49803f -> /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key
	I0630 15:53:28.535911 1620744 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key
	I0630 15:53:28.535927 1620744 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt with IP's: []
	I0630 15:53:28.888408 1620744 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt ...
	I0630 15:53:28.888451 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt: {Name:mkf4d3b4ec0f8a5e1d05a277edfc5ceb8007805d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.888663 1620744 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key ...
	I0630 15:53:28.888680 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key: {Name:mk8ac529262f2861b6afd57f5e5bb4e1423ec462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:28.888902 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem (1338 bytes)
	W0630 15:53:28.888952 1620744 certs.go:480] ignoring /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732_empty.pem, impossibly tiny 0 bytes
	I0630 15:53:28.888967 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 15:53:28.889001 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem (1078 bytes)
	I0630 15:53:28.889037 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/cert.pem (1123 bytes)
	I0630 15:53:28.889066 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/key.pem (1675 bytes)
	I0630 15:53:28.889125 1620744 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem (1708 bytes)
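Any of the certificates enumerated above can be checked for subject and expiry with openssl before they are copied to the node; for example (path from the list above):

	openssl x509 -noout -subject -enddate \
	  -in /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/ca.pem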
	I0630 15:53:28.889775 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 15:53:28.927242 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 15:53:28.967550 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 15:53:29.017537 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0630 15:53:29.055944 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 15:53:29.085822 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0630 15:53:29.183293 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 15:53:29.217912 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/bridge-668101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 15:53:29.249508 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/certs/1557732.pem --> /usr/share/ca-certificates/1557732.pem (1338 bytes)
	I0630 15:53:29.281853 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/ssl/certs/15577322.pem --> /usr/share/ca-certificates/15577322.pem (1708 bytes)
	I0630 15:53:29.312083 1620744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1550299/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 15:53:29.346274 1620744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 15:53:29.368862 1620744 ssh_runner.go:195] Run: openssl version
	I0630 15:53:29.376652 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1557732.pem && ln -fs /usr/share/ca-certificates/1557732.pem /etc/ssl/certs/1557732.pem"
	I0630 15:53:29.391675 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.396844 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 30 14:38 /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.396917 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem
	I0630 15:53:29.404281 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/51391683.0"
	I0630 15:53:29.417581 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15577322.pem && ln -fs /usr/share/ca-certificates/15577322.pem /etc/ssl/certs/15577322.pem"
	I0630 15:53:29.430622 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.436093 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 30 14:38 /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.436174 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15577322.pem
	I0630 15:53:29.443611 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15577322.pem /etc/ssl/certs/3ec20f2e.0"
	I0630 15:53:29.457568 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 15:53:29.471747 1620744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.477296 1620744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:18 /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.477380 1620744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 15:53:29.485268 1620744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
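The hash-and-symlink sequence above follows OpenSSL's c_rehash convention: each CA is linked as <subject-hash>.0 under /etc/ssl/certs so TLS clients can locate it by hash. One iteration spelled out (the hash matches the 51391683.0 link above):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/1557732.pem)   # -> 51391683
	sudo ln -fs /etc/ssl/certs/1557732.pem /etc/ssl/certs/$h.0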
	I0630 15:53:29.498865 1620744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 15:53:29.504743 1620744 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 15:53:29.504819 1620744 kubeadm.go:392] StartCluster: {Name:bridge-668101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:bridge-668101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 15:53:29.504990 1620744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0630 15:53:29.505114 1620744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 15:53:29.554378 1620744 cri.go:89] found id: ""
	I0630 15:53:29.554448 1620744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 15:53:29.566684 1620744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:53:29.580816 1620744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:53:29.594087 1620744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:53:29.594122 1620744 kubeadm.go:157] found existing configuration files:
	
	I0630 15:53:29.594198 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:53:29.606128 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:53:29.606208 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:53:29.617824 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:53:29.628760 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:53:29.628849 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:53:29.643046 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:53:29.654618 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:53:29.654744 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:53:29.670789 1620744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:53:29.686439 1620744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:53:29.686511 1620744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
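Each stale-config probe above is the same two-step idiom: grep the file for the expected control-plane endpoint, and remove it when the check fails (here the files simply do not exist yet, so all four are cleared). As a one-liner:

	sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	  || sudo rm -f /etc/kubernetes/admin.conf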
	I0630 15:53:29.701021 1620744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:53:29.759278 1620744 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 15:53:29.759355 1620744 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:53:29.854960 1620744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:53:29.855106 1620744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:53:29.855286 1620744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 15:53:29.866548 1620744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0630 15:53:27.181869 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	W0630 15:53:29.675930 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:27.767088 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:27.767156 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:27.814647 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:27.814683 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.372878 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:30.392885 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:30.392993 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:30.450197 1612198 cri.go:89] found id: ""
	I0630 15:53:30.450235 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.450248 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:30.450258 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:30.450342 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:30.507009 1612198 cri.go:89] found id: ""
	I0630 15:53:30.507041 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.507051 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:30.507060 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:30.507147 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:30.554455 1612198 cri.go:89] found id: ""
	I0630 15:53:30.554485 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.554496 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:30.554505 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:30.554572 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:30.598785 1612198 cri.go:89] found id: ""
	I0630 15:53:30.598821 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.598833 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:30.598841 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:30.598911 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:30.634661 1612198 cri.go:89] found id: ""
	I0630 15:53:30.634701 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.634713 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:30.634722 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:30.634794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:30.674870 1612198 cri.go:89] found id: ""
	I0630 15:53:30.674903 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.674913 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:30.674922 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:30.674984 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:30.715843 1612198 cri.go:89] found id: ""
	I0630 15:53:30.715873 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.715882 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:30.715889 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:30.715947 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:30.752318 1612198 cri.go:89] found id: ""
	I0630 15:53:30.752356 1612198 logs.go:282] 0 containers: []
	W0630 15:53:30.752375 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:30.752390 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:30.752406 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:30.824741 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:30.824784 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:30.838605 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:30.838640 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:30.915839 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:30.915924 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:30.915959 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:30.999770 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:30.999820 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:29.943503 1620744 out.go:235]   - Generating certificates and keys ...
	I0630 15:53:29.943673 1620744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:53:29.943767 1620744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:53:30.013369 1620744 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 15:53:30.204256 1620744 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 15:53:30.247370 1620744 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 15:53:30.347086 1620744 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 15:53:30.905210 1620744 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 15:53:30.905417 1620744 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-668101 localhost] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0630 15:53:30.977829 1620744 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 15:53:30.978113 1620744 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-668101 localhost] and IPs [192.168.72.11 127.0.0.1 ::1]
	I0630 15:53:31.175683 1620744 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 15:53:31.342818 1620744 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 15:53:32.050944 1620744 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 15:53:32.051027 1620744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:53:32.176724 1620744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:53:32.249204 1620744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 15:53:32.600906 1620744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:53:33.139702 1620744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:53:33.541220 1620744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:53:33.541742 1620744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:53:33.544105 1620744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0630 15:53:31.676642 1619158 node_ready.go:57] node "flannel-668101" has "Ready":"False" status (will retry)
	I0630 15:53:32.675850 1619158 node_ready.go:49] node "flannel-668101" is "Ready"
	I0630 15:53:32.675909 1619158 node_ready.go:38] duration metric: took 10.003542336s for node "flannel-668101" to be "Ready" ...
	I0630 15:53:32.675929 1619158 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:53:32.676002 1619158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:32.701943 1619158 api_server.go:72] duration metric: took 10.844066824s to wait for apiserver process to appear ...
	I0630 15:53:32.701974 1619158 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:53:32.701996 1619158 api_server.go:253] Checking apiserver healthz at https://192.168.50.164:8443/healthz ...
	I0630 15:53:32.706791 1619158 api_server.go:279] https://192.168.50.164:8443/healthz returned 200:
	ok
	I0630 15:53:32.708016 1619158 api_server.go:141] control plane version: v1.33.2
	I0630 15:53:32.708046 1619158 api_server.go:131] duration metric: took 6.062225ms to wait for apiserver health ...
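The healthz probe above is a plain HTTPS GET that returns the literal body "ok" on success; the equivalent check from outside the harness (a sketch, with certificate verification skipped for brevity; IP and port from the log):

	curl -k https://192.168.50.164:8443/healthz    # prints: ok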
	I0630 15:53:32.708058 1619158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:53:32.716053 1619158 system_pods.go:59] 7 kube-system pods found
	I0630 15:53:32.716114 1619158 system_pods.go:61] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:32.716121 1619158 system_pods.go:61] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:32.716130 1619158 system_pods.go:61] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:32.716136 1619158 system_pods.go:61] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:32.716146 1619158 system_pods.go:61] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:32.716151 1619158 system_pods.go:61] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:32.716159 1619158 system_pods.go:61] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:32.716169 1619158 system_pods.go:74] duration metric: took 8.103111ms to wait for pod list to return data ...
	I0630 15:53:32.716184 1619158 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:53:32.721014 1619158 default_sa.go:45] found service account: "default"
	I0630 15:53:32.721045 1619158 default_sa.go:55] duration metric: took 4.852192ms for default service account to be created ...
	I0630 15:53:32.721059 1619158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:53:32.729131 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:32.729169 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:32.729178 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:32.729186 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:32.729192 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:32.729197 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:32.729208 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:32.729215 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:32.729252 1619158 retry.go:31] will retry after 311.225306ms: missing components: kube-dns
	I0630 15:53:33.046517 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.046552 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.046558 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.046563 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.046567 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.046571 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.046574 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.046578 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:33.046594 1619158 retry.go:31] will retry after 361.483143ms: missing components: kube-dns
	I0630 15:53:33.413105 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.413142 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.413148 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.413154 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.413159 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.413163 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.413171 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.413175 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:33.413191 1619158 retry.go:31] will retry after 423.305566ms: missing components: kube-dns
	I0630 15:53:33.853206 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:33.853242 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:33.853259 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:33.853267 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:33.853272 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:33.853277 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:33.853282 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:33.853287 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:33.853305 1619158 retry.go:31] will retry after 554.816826ms: missing components: kube-dns
	I0630 15:53:34.414917 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:34.414989 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:34.415017 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:34.415029 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:34.415036 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:34.415042 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:34.415047 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:34.415057 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:34.415250 1619158 retry.go:31] will retry after 473.364986ms: missing components: kube-dns
	I0630 15:53:34.892811 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:34.892851 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:34.892857 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:34.892863 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:34.892866 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:34.892870 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:34.892873 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:34.892877 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:34.892893 1619158 retry.go:31] will retry after 582.108906ms: missing components: kube-dns
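The retry loop above keeps polling the kube-system pod list until coredns (the "kube-dns" component it reports missing) turns Ready; an equivalent manual check, assuming the kubectl context carries the profile name as elsewhere in this report and the standard kubeadm k8s-app=kube-dns label on coredns:

	kubectl --context flannel-668101 -n kube-system get pods
	kubectl --context flannel-668101 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=120s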
	I0630 15:53:33.553483 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:33.570047 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:33.570150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:33.616739 1612198 cri.go:89] found id: ""
	I0630 15:53:33.616775 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.616788 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:33.616798 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:33.616865 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:33.659234 1612198 cri.go:89] found id: ""
	I0630 15:53:33.659265 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.659277 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:33.659285 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:33.659353 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:33.697938 1612198 cri.go:89] found id: ""
	I0630 15:53:33.697977 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.697989 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:33.697997 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:33.698115 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:33.739043 1612198 cri.go:89] found id: ""
	I0630 15:53:33.739104 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.739118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:33.739127 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:33.739200 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:33.781947 1612198 cri.go:89] found id: ""
	I0630 15:53:33.781983 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.781994 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:33.782006 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:33.782078 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:33.818201 1612198 cri.go:89] found id: ""
	I0630 15:53:33.818241 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.818254 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:33.818264 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:33.818336 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:33.865630 1612198 cri.go:89] found id: ""
	I0630 15:53:33.865767 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.865806 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:33.865851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:33.865966 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:33.905740 1612198 cri.go:89] found id: ""
	I0630 15:53:33.905807 1612198 logs.go:282] 0 containers: []
	W0630 15:53:33.905821 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:33.905834 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:33.905852 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:33.978403 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:33.978451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:34.000180 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:34.000225 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:34.077381 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:34.077433 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:34.077451 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:34.158516 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:34.158571 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:36.703046 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:36.725942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:36.726033 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:36.769910 1612198 cri.go:89] found id: ""
	I0630 15:53:36.770040 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.770066 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:36.770075 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:36.770150 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:36.817303 1612198 cri.go:89] found id: ""
	I0630 15:53:36.817339 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.817350 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:36.817358 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:36.817442 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:36.852676 1612198 cri.go:89] found id: ""
	I0630 15:53:36.852721 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.852734 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:36.852743 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:36.852811 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:36.896796 1612198 cri.go:89] found id: ""
	I0630 15:53:36.896829 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.896840 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:36.896848 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:36.896929 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:36.932669 1612198 cri.go:89] found id: ""
	I0630 15:53:36.932708 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.932720 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:36.932729 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:36.932810 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:36.972728 1612198 cri.go:89] found id: ""
	I0630 15:53:36.972762 1612198 logs.go:282] 0 containers: []
	W0630 15:53:36.972773 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:36.972781 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:36.972855 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:37.009554 1612198 cri.go:89] found id: ""
	I0630 15:53:37.009594 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.009605 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:37.009614 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:37.009688 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:37.047124 1612198 cri.go:89] found id: ""
	I0630 15:53:37.047163 1612198 logs.go:282] 0 containers: []
	W0630 15:53:37.047175 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:37.047188 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:37.047204 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:37.110372 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:37.110427 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:37.127309 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:37.127352 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:37.196740 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:37.196770 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:37.196793 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:37.284276 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:37.284322 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:33.546215 1620744 out.go:235]   - Booting up control plane ...
	I0630 15:53:33.546374 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:53:33.546471 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:53:33.546551 1620744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:53:33.567048 1620744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:53:33.573691 1620744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:53:33.573744 1620744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:53:33.768543 1620744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 15:53:33.768723 1620744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 15:53:34.769251 1620744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001331666s
	I0630 15:53:34.771797 1620744 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 15:53:34.771934 1620744 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.72.11:8443/livez
	I0630 15:53:34.772075 1620744 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 15:53:34.772163 1620744 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 15:53:37.720863 1620744 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.949703734s
	I0630 15:53:38.248441 1620744 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.477609557s
	I0630 15:53:40.275015 1620744 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.504420612s
	I0630 15:53:40.295071 1620744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 15:53:40.318773 1620744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 15:53:40.357954 1620744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 15:53:40.358269 1620744 kubeadm.go:310] [mark-control-plane] Marking the node bridge-668101 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 15:53:40.377248 1620744 kubeadm.go:310] [bootstrap-token] Using token: ay7ggg.v4lz4n8lgdcwzb1z
	I0630 15:53:35.480398 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:35.480445 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:35.480453 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:35.480460 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:35.480466 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:35.480472 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:35.480477 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:35.480481 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:35.480501 1619158 retry.go:31] will retry after 722.350023ms: missing components: kube-dns
	I0630 15:53:36.207319 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:36.207354 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:36.207360 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:36.207367 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:36.207372 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:36.207376 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:36.207379 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:36.207384 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:36.207401 1619158 retry.go:31] will retry after 1.469551324s: missing components: kube-dns
	I0630 15:53:37.682415 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:37.682461 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:37.682470 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:37.682479 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:37.682484 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:37.682491 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:37.682496 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:37.682501 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:37.682522 1619158 retry.go:31] will retry after 1.601843725s: missing components: kube-dns
	I0630 15:53:39.289676 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:39.289721 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:39.289731 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:39.289741 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:39.289748 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:39.289753 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:39.289759 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:39.289763 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:39.289786 1619158 retry.go:31] will retry after 1.660514017s: missing components: kube-dns
	I0630 15:53:40.379081 1620744 out.go:235]   - Configuring RBAC rules ...
	I0630 15:53:40.379262 1620744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 15:53:40.390839 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 15:53:40.406448 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 15:53:40.414176 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 15:53:40.420005 1620744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 15:53:40.424273 1620744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 15:53:40.682394 1620744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 15:53:41.124826 1620744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 15:53:41.682390 1620744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 15:53:41.683365 1620744 kubeadm.go:310] 
	I0630 15:53:41.683473 1620744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 15:53:41.683509 1620744 kubeadm.go:310] 
	I0630 15:53:41.683630 1620744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 15:53:41.683647 1620744 kubeadm.go:310] 
	I0630 15:53:41.683685 1620744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 15:53:41.683760 1620744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 15:53:41.683843 1620744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 15:53:41.683852 1620744 kubeadm.go:310] 
	I0630 15:53:41.683934 1620744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 15:53:41.683943 1620744 kubeadm.go:310] 
	I0630 15:53:41.684007 1620744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 15:53:41.684021 1620744 kubeadm.go:310] 
	I0630 15:53:41.684099 1620744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 15:53:41.684203 1620744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 15:53:41.684332 1620744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 15:53:41.684349 1620744 kubeadm.go:310] 
	I0630 15:53:41.684477 1620744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 15:53:41.684586 1620744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 15:53:41.684594 1620744 kubeadm.go:310] 
	I0630 15:53:41.684715 1620744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ay7ggg.v4lz4n8lgdcwzb1z \
	I0630 15:53:41.684897 1620744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 \
	I0630 15:53:41.684947 1620744 kubeadm.go:310] 	--control-plane 
	I0630 15:53:41.684960 1620744 kubeadm.go:310] 
	I0630 15:53:41.685080 1620744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 15:53:41.685101 1620744 kubeadm.go:310] 
	I0630 15:53:41.685204 1620744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ay7ggg.v4lz4n8lgdcwzb1z \
	I0630 15:53:41.685345 1620744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:991ce90cbe1973af010e6d69a602e0ccf3554f863d4d99d055ab77f76e65dac8 
	I0630 15:53:41.686851 1620744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:53:41.686884 1620744 cni.go:84] Creating CNI manager for "bridge"
	I0630 15:53:41.688726 1620744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 15:53:39.832609 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:39.849706 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:39.849794 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:39.893352 1612198 cri.go:89] found id: ""
	I0630 15:53:39.893391 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.893433 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:39.893442 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:39.893515 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:39.932840 1612198 cri.go:89] found id: ""
	I0630 15:53:39.932868 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.932876 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:39.932890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:39.932955 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:39.981060 1612198 cri.go:89] found id: ""
	I0630 15:53:39.981097 1612198 logs.go:282] 0 containers: []
	W0630 15:53:39.981109 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:39.981117 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:39.981203 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:40.018727 1612198 cri.go:89] found id: ""
	I0630 15:53:40.018768 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.018781 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:40.018790 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:40.018863 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:40.061585 1612198 cri.go:89] found id: ""
	I0630 15:53:40.061627 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.061640 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:40.061649 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:40.061743 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:40.105417 1612198 cri.go:89] found id: ""
	I0630 15:53:40.105448 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.105456 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:40.105464 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:40.105527 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:40.141656 1612198 cri.go:89] found id: ""
	I0630 15:53:40.141686 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.141697 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:40.141705 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:40.141775 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:40.179978 1612198 cri.go:89] found id: ""
	I0630 15:53:40.180011 1612198 logs.go:282] 0 containers: []
	W0630 15:53:40.180020 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:40.180029 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:40.180042 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:40.197879 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:40.197924 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:40.271201 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:40.271257 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:40.271277 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:40.355166 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:40.355211 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:40.408985 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:40.409023 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:41.690209 1620744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 15:53:41.702679 1620744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 15:53:41.734200 1620744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 15:53:41.734327 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:41.734404 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-668101 minikube.k8s.io/updated_at=2025_06_30T15_53_41_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=bridge-668101 minikube.k8s.io/primary=true
	I0630 15:53:41.895628 1620744 ops.go:34] apiserver oom_adj: -16
	I0630 15:53:41.895917 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:42.396198 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:42.896761 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:40.954924 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:40.954967 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:40.954975 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:40.954985 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:40.954990 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:40.954996 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:40.955000 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:40.955005 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:40.955026 1619158 retry.go:31] will retry after 2.638740648s: missing components: kube-dns
	I0630 15:53:43.598079 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:43.598113 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:43.598119 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:43.598126 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:43.598130 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:43.598134 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:43.598137 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:43.598140 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:43.598162 1619158 retry.go:31] will retry after 3.489845888s: missing components: kube-dns
	I0630 15:53:43.396863 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:43.896228 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:44.396818 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:44.896130 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:45.396432 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:45.896985 1620744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 15:53:46.000729 1620744 kubeadm.go:1105] duration metric: took 4.266473213s to wait for elevateKubeSystemPrivileges
	I0630 15:53:46.000792 1620744 kubeadm.go:394] duration metric: took 16.495976664s to StartCluster
	I0630 15:53:46.000825 1620744 settings.go:142] acquiring lock: {Name:mka065f125c20a669403948a4a12d67af9cfaa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:46.000948 1620744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:53:46.002167 1620744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/kubeconfig: {Name:mk0514c04deec1224d3189194543d58a5d88a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 15:53:46.002462 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 15:53:46.002466 1620744 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0630 15:53:46.002560 1620744 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0630 15:53:46.002667 1620744 addons.go:69] Setting storage-provisioner=true in profile "bridge-668101"
	I0630 15:53:46.002692 1620744 addons.go:238] Setting addon storage-provisioner=true in "bridge-668101"
	I0630 15:53:46.002713 1620744 addons.go:69] Setting default-storageclass=true in profile "bridge-668101"
	I0630 15:53:46.002742 1620744 host.go:66] Checking if "bridge-668101" exists ...
	I0630 15:53:46.002766 1620744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-668101"
	I0630 15:53:46.002725 1620744 config.go:182] Loaded profile config "bridge-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:53:46.003139 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.003182 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.003225 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.003269 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.004052 1620744 out.go:177] * Verifying Kubernetes components...
	I0630 15:53:46.005665 1620744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 15:53:46.020307 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0630 15:53:46.021011 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.021601 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.021625 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.021987 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.022574 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.022627 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.026416 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0630 15:53:46.027718 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.028783 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.028829 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.029604 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.029867 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.035884 1620744 addons.go:238] Setting addon default-storageclass=true in "bridge-668101"
	I0630 15:53:46.035944 1620744 host.go:66] Checking if "bridge-668101" exists ...
	I0630 15:53:46.036350 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.036409 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.039472 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0630 15:53:46.040012 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.040664 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.040690 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.041066 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.041289 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.043282 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:46.045535 1620744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 15:53:42.967786 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:42.987531 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:42.987625 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:43.023328 1612198 cri.go:89] found id: ""
	I0630 15:53:43.023360 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.023370 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:43.023377 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:43.023449 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:43.059730 1612198 cri.go:89] found id: ""
	I0630 15:53:43.059774 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.059785 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:43.059793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:43.059875 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:43.100987 1612198 cri.go:89] found id: ""
	I0630 15:53:43.101024 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.101036 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:43.101045 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:43.101118 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:43.139556 1612198 cri.go:89] found id: ""
	I0630 15:53:43.139591 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.139603 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:43.139611 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:43.139669 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:43.177647 1612198 cri.go:89] found id: ""
	I0630 15:53:43.177677 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.177686 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:43.177692 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:43.177749 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:43.214354 1612198 cri.go:89] found id: ""
	I0630 15:53:43.214388 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.214400 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:43.214407 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:43.214475 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:43.254332 1612198 cri.go:89] found id: ""
	I0630 15:53:43.254364 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.254376 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:43.254393 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:43.254459 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:43.292194 1612198 cri.go:89] found id: ""
	I0630 15:53:43.292224 1612198 logs.go:282] 0 containers: []
	W0630 15:53:43.292232 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:43.292243 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:43.292255 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:43.345690 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:43.345732 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:43.360155 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:43.360191 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:43.441505 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:43.441537 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:43.441554 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:43.527009 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:43.527063 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:46.069596 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:46.092563 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:46.092646 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:46.132093 1612198 cri.go:89] found id: ""
	I0630 15:53:46.132131 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.132144 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:46.132153 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:46.132225 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:46.175509 1612198 cri.go:89] found id: ""
	I0630 15:53:46.175544 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.175556 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:46.175565 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:46.175647 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:46.225442 1612198 cri.go:89] found id: ""
	I0630 15:53:46.225478 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.225490 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:46.225502 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:46.225573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:46.275070 1612198 cri.go:89] found id: ""
	I0630 15:53:46.275109 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.275122 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:46.275131 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:46.275206 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:46.320084 1612198 cri.go:89] found id: ""
	I0630 15:53:46.320116 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.320126 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:46.320133 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:46.320198 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:46.360602 1612198 cri.go:89] found id: ""
	I0630 15:53:46.360682 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.360699 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:46.360711 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:46.360818 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:46.404187 1612198 cri.go:89] found id: ""
	I0630 15:53:46.404222 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.404231 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:46.404238 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:46.404304 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:46.457761 1612198 cri.go:89] found id: ""
	I0630 15:53:46.457803 1612198 logs.go:282] 0 containers: []
	W0630 15:53:46.457820 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:46.457835 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:46.457855 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:46.524526 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:46.524574 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:46.542938 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:46.542974 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:46.620336 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:46.620372 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:46.620386 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:46.706447 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:46.706496 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:46.047099 1620744 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:46.047127 1620744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 15:53:46.047171 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:46.051881 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.052589 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:46.052618 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.052990 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:46.053240 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:46.053473 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:46.053666 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:46.055796 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0630 15:53:46.056603 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.057196 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.057218 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.057663 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.058201 1620744 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:53:46.058252 1620744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:53:46.078886 1620744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0630 15:53:46.079821 1620744 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:53:46.080456 1620744 main.go:141] libmachine: Using API Version  1
	I0630 15:53:46.080484 1620744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:53:46.080941 1620744 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:53:46.081233 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetState
	I0630 15:53:46.083743 1620744 main.go:141] libmachine: (bridge-668101) Calling .DriverName
	I0630 15:53:46.084008 1620744 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:46.084024 1620744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 15:53:46.084042 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHHostname
	I0630 15:53:46.088653 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.089277 1620744 main.go:141] libmachine: (bridge-668101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:66", ip: ""} in network mk-bridge-668101: {Iface:virbr4 ExpiryTime:2025-06-30 16:53:13 +0000 UTC Type:0 Mac:52:54:00:de:25:66 Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:bridge-668101 Clientid:01:52:54:00:de:25:66}
	I0630 15:53:46.089310 1620744 main.go:141] libmachine: (bridge-668101) DBG | domain bridge-668101 has defined IP address 192.168.72.11 and MAC address 52:54:00:de:25:66 in network mk-bridge-668101
	I0630 15:53:46.089516 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHPort
	I0630 15:53:46.089752 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHKeyPath
	I0630 15:53:46.090006 1620744 main.go:141] libmachine: (bridge-668101) Calling .GetSSHUsername
	I0630 15:53:46.090184 1620744 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/bridge-668101/id_rsa Username:docker}
	I0630 15:53:46.376641 1620744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 15:53:46.376679 1620744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 15:53:46.468914 1620744 node_ready.go:35] waiting up to 15m0s for node "bridge-668101" to be "Ready" ...
	I0630 15:53:46.483783 1620744 node_ready.go:49] node "bridge-668101" is "Ready"
	I0630 15:53:46.483830 1620744 node_ready.go:38] duration metric: took 14.870889ms for node "bridge-668101" to be "Ready" ...
	I0630 15:53:46.483849 1620744 api_server.go:52] waiting for apiserver process to appear ...
	I0630 15:53:46.483904 1620744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:46.639045 1620744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 15:53:46.707352 1620744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 15:53:47.223014 1620744 start.go:972] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0630 15:53:47.223081 1620744 api_server.go:72] duration metric: took 1.2205745s to wait for apiserver process to appear ...
	I0630 15:53:47.223099 1620744 api_server.go:88] waiting for apiserver healthz status ...
	I0630 15:53:47.223143 1620744 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I0630 15:53:47.223206 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.223233 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.223657 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.223694 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.223705 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.223713 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.223714 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.223963 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.224017 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.223999 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.242476 1620744 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I0630 15:53:47.244520 1620744 api_server.go:141] control plane version: v1.33.2
	I0630 15:53:47.244556 1620744 api_server.go:131] duration metric: took 21.449815ms to wait for apiserver health ...
	I0630 15:53:47.244567 1620744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 15:53:47.260743 1620744 system_pods.go:59] 7 kube-system pods found
	I0630 15:53:47.260790 1620744 system_pods.go:61] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.260803 1620744 system_pods.go:61] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.260813 1620744 system_pods.go:61] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.260822 1620744 system_pods.go:61] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.260833 1620744 system_pods.go:61] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.260847 1620744 system_pods.go:61] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.260855 1620744 system_pods.go:61] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.260862 1620744 system_pods.go:74] duration metric: took 16.289084ms to wait for pod list to return data ...
	I0630 15:53:47.260873 1620744 default_sa.go:34] waiting for default service account to be created ...
	I0630 15:53:47.265456 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.265485 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.265804 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.265825 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.265828 1620744 main.go:141] libmachine: (bridge-668101) DBG | Closing plugin on server side
	I0630 15:53:47.273837 1620744 default_sa.go:45] found service account: "default"
	I0630 15:53:47.273880 1620744 default_sa.go:55] duration metric: took 12.997202ms for default service account to be created ...
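
[editor's note] The default_sa.go wait above blocks until the "default" ServiceAccount exists. A sketch of that condition, assuming an already-built *kubernetes.Clientset cs, the client-go imports from the previous sketch, plus context, time, and apierrors "k8s.io/apimachinery/pkg/api/errors"; the 250ms interval is an illustrative choice:

    // waitForDefaultSA polls until the "default" ServiceAccount exists in the
    // "default" namespace. NotFound means "keep polling"; any other error aborts.
    func waitForDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
        for {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
            if err == nil {
                return nil // found service account: "default"
            }
            if !apierrors.IsNotFound(err) {
                return err
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(250 * time.Millisecond):
            }
        }
    }
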
	I0630 15:53:47.273895 1620744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 15:53:47.345061 1620744 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.345113 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.345126 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.345134 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.345144 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.345154 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.345162 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.345175 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.345223 1620744 retry.go:31] will retry after 281.101886ms: missing components: kube-dns
	I0630 15:53:47.638563 1620744 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.638608 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.638620 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.638628 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.638637 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.638647 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.638656 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.638663 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.638680 1620744 retry.go:31] will retry after 257.359626ms: missing components: kube-dns
	I0630 15:53:47.705752 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.705779 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.706118 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.706145 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.706176 1620744 main.go:141] libmachine: Making call to close driver server
	I0630 15:53:47.706184 1620744 main.go:141] libmachine: (bridge-668101) Calling .Close
	I0630 15:53:47.706445 1620744 main.go:141] libmachine: Successfully made call to close driver server
	I0630 15:53:47.706459 1620744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 15:53:47.709137 1620744 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0630 15:53:47.710580 1620744 addons.go:514] duration metric: took 1.708021313s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0630 15:53:47.727425 1620744 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-668101" context rescaled to 1 replicas
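
[editor's note] The kapi.go line above rescales the coredns deployment from two replicas to one. A sketch of that operation via the deployments scale subresource, assuming the same clientset and imports as the sketches above (this approximates, not reproduces, minikube's kapi helper):

    // scaleCoreDNS sets the coredns deployment to the given replica count,
    // the operation behind the "rescaled to 1 replicas" line.
    func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
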
	I0630 15:53:47.901617 1620744 system_pods.go:86] 8 kube-system pods found
	I0630 15:53:47.901662 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.901673 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:47.901680 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:47.901689 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:47.901699 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:47.901705 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:47.901716 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:47.901729 1620744 system_pods.go:89] "storage-provisioner" [d39eade7-d69c-4ba1-871c-9d22e90f3162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:47.901756 1620744 retry.go:31] will retry after 361.046684ms: missing components: kube-dns
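
[editor's note] The retry.go lines above re-list kube-system pods until the missing component (kube-dns, i.e. a Running coredns pod) appears; the logged backoff durations are randomized per attempt. A fixed-interval sketch of the same loop, assuming the prior clientset plus corev1 "k8s.io/api/core/v1"; the 300ms backoff is illustrative:

    // waitForKubeDNS retries until at least one Running pod carries the
    // k8s-app=kube-dns label, mirroring the "missing components: kube-dns"
    // retries above.
    func waitForKubeDNS(ctx context.Context, cs *kubernetes.Clientset) error {
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
            if err != nil {
                return err
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(300 * time.Millisecond):
            }
        }
    }
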
	I0630 15:53:47.092203 1619158 system_pods.go:86] 7 kube-system pods found
	I0630 15:53:47.092247 1619158 system_pods.go:89] "coredns-674b8bbfcf-zlnjm" [d457c381-4da7-4640-acf1-7864e77b7119] Running
	I0630 15:53:47.092256 1619158 system_pods.go:89] "etcd-flannel-668101" [0665a2ea-41f9-4556-8871-8e6ee5ce3bf0] Running
	I0630 15:53:47.092261 1619158 system_pods.go:89] "kube-apiserver-flannel-668101" [cfcd49c7-4901-44bc-93bb-353bb60e9e58] Running
	I0630 15:53:47.092266 1619158 system_pods.go:89] "kube-controller-manager-flannel-668101" [f8dac775-870f-4d19-8f3b-86c75fb12dd8] Running
	I0630 15:53:47.092272 1619158 system_pods.go:89] "kube-proxy-fl9rb" [e43f2d78-12eb-4010-ac56-97f2efdaef49] Running
	I0630 15:53:47.092279 1619158 system_pods.go:89] "kube-scheduler-flannel-668101" [72c9d243-dbb4-44a1-b16e-05616d5b4b56] Running
	I0630 15:53:47.092285 1619158 system_pods.go:89] "storage-provisioner" [c3ba76ba-9b62-41bb-9d1e-28c0779d6b32] Running
	I0630 15:53:47.092297 1619158 system_pods.go:126] duration metric: took 14.371230346s to wait for k8s-apps to be running ...
	I0630 15:53:47.092315 1619158 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 15:53:47.092395 1619158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:53:47.107330 1619158 system_svc.go:56] duration metric: took 14.999723ms WaitForService to wait for kubelet
	I0630 15:53:47.107386 1619158 kubeadm.go:578] duration metric: took 25.24951704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
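
[editor's note] The system_svc.go check above issues the literal command from the log over SSH and reads only the exit status ("is-active --quiet" prints nothing). A local stand-in for that probe, with exec.Command (import os/exec) substituting for minikube's SSH runner:

    // kubeletActive runs the same probe the system_svc.go lines issue;
    // a zero exit code means the unit is active.
    func kubeletActive() bool {
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        return cmd.Run() == nil
    }
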
	I0630 15:53:47.107425 1619158 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:53:47.111477 1619158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:53:47.111513 1619158 node_conditions.go:123] node cpu capacity is 2
	I0630 15:53:47.111531 1619158 node_conditions.go:105] duration metric: took 4.099412ms to run NodePressure ...
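
[editor's note] The node_conditions.go lines above ("storage ephemeral capacity is 17734596Ki", "cpu capacity is 2") come straight from each node's status. A sketch of reading those fields, assuming the same clientset, fmt, and corev1 imports as earlier:

    // printNodeCapacity mirrors the NodePressure verification step: it reads
    // ephemeral-storage and CPU capacity from every node's status.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
        }
        return nil
    }
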
	I0630 15:53:47.111548 1619158 start.go:241] waiting for startup goroutines ...
	I0630 15:53:47.111557 1619158 start.go:246] waiting for cluster config update ...
	I0630 15:53:47.111572 1619158 start.go:255] writing updated cluster config ...
	I0630 15:53:47.111942 1619158 ssh_runner.go:195] Run: rm -f paused
	I0630 15:53:47.118482 1619158 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:47.122226 1619158 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-zlnjm" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.126835 1619158 pod_ready.go:94] pod "coredns-674b8bbfcf-zlnjm" is "Ready"
	I0630 15:53:47.126873 1619158 pod_ready.go:86] duration metric: took 4.619265ms for pod "coredns-674b8bbfcf-zlnjm" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.129263 1619158 pod_ready.go:83] waiting for pod "etcd-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.133727 1619158 pod_ready.go:94] pod "etcd-flannel-668101" is "Ready"
	I0630 15:53:47.133762 1619158 pod_ready.go:86] duration metric: took 4.469718ms for pod "etcd-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.135699 1619158 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.140237 1619158 pod_ready.go:94] pod "kube-apiserver-flannel-668101" is "Ready"
	I0630 15:53:47.140273 1619158 pod_ready.go:86] duration metric: took 4.536145ms for pod "kube-apiserver-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.143805 1619158 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.524212 1619158 pod_ready.go:94] pod "kube-controller-manager-flannel-668101" is "Ready"
	I0630 15:53:47.524250 1619158 pod_ready.go:86] duration metric: took 380.412398ms for pod "kube-controller-manager-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:47.723808 1619158 pod_ready.go:83] waiting for pod "kube-proxy-fl9rb" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.122925 1619158 pod_ready.go:94] pod "kube-proxy-fl9rb" is "Ready"
	I0630 15:53:48.122960 1619158 pod_ready.go:86] duration metric: took 399.120603ms for pod "kube-proxy-fl9rb" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.323641 1619158 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.722788 1619158 pod_ready.go:94] pod "kube-scheduler-flannel-668101" is "Ready"
	I0630 15:53:48.722822 1619158 pod_ready.go:86] duration metric: took 399.155106ms for pod "kube-scheduler-flannel-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:48.722836 1619158 pod_ready.go:40] duration metric: took 1.604308968s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:48.771506 1619158 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:53:48.774098 1619158 out.go:177] * Done! kubectl is now configured to use "flannel-668101" cluster and "default" namespace by default
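
[editor's note] The pod_ready.go waits that just completed iterate over label selectors (k8s-app=kube-dns, component=etcd, and so on) and block until each matching pod reports the Ready condition. A sketch of the underlying readiness check, assuming the clientset and corev1/metav1 imports from the earlier sketches; this approximates minikube's helper rather than reproducing it:

    // podsReady reports whether every kube-system pod matching selector has
    // its Ready condition set to True.
    func podsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }
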
	I0630 15:53:49.256833 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:49.276256 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:49.276328 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:49.326292 1612198 cri.go:89] found id: ""
	I0630 15:53:49.326327 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.326339 1612198 logs.go:284] No container was found matching "kube-apiserver"
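
[editor's note] Each cri.go step in this loop shells out to crictl with --quiet, which prints one container ID per line; an empty result produces the found id: "" / "0 containers" / warning triple above. A sketch of that listing (imports os/exec and strings assumed; exec.Command again stands in for the SSH runner):

    // listCRIContainers runs crictl the way the cri.go lines do and splits
    // the --quiet output into a slice of container IDs.
    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }
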
	I0630 15:53:49.326356 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:49.326427 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:49.371428 1612198 cri.go:89] found id: ""
	I0630 15:53:49.371486 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.371496 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:49.371503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:49.371568 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:49.415763 1612198 cri.go:89] found id: ""
	I0630 15:53:49.415840 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.415855 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:49.415864 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:49.415927 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:49.456276 1612198 cri.go:89] found id: ""
	I0630 15:53:49.456313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.456324 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:49.456332 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:49.456421 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:49.496696 1612198 cri.go:89] found id: ""
	I0630 15:53:49.496735 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.496753 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:49.496762 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:49.496819 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:49.537728 1612198 cri.go:89] found id: ""
	I0630 15:53:49.537763 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.537771 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:49.537778 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:49.537837 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:49.575693 1612198 cri.go:89] found id: ""
	I0630 15:53:49.575725 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.575734 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:49.575740 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:49.575795 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:49.617896 1612198 cri.go:89] found id: ""
	I0630 15:53:49.617931 1612198 logs.go:282] 0 containers: []
	W0630 15:53:49.617941 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:49.617967 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:49.617986 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:49.668327 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:49.668372 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:49.721223 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:49.721270 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:49.737061 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:49.737094 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:49.814464 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
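
[editor's note] The "describe nodes" failure above is expected given the preceding crictl results: no kube-apiserver container is running, so nothing listens on localhost:8443 and the pinned kubectl exits 1 with "connection refused". The step itself is just a shell-out, sketched here with exec.Command as a stand-in for the SSH runner:

    // describeNodes runs the same command as the failing step above and
    // returns combined stdout/stderr plus the exit error.
    func describeNodes() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        return string(out), err
    }
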
	I0630 15:53:49.814490 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:49.814503 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
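
[editor's note] The logs.go "Gathering logs for ..." cycle above is a sequence of shell commands, one per log source, run on the node. A sketch collecting the same sources shown in the log (journalctl for kubelet and CRI-O, filtered dmesg), with exec.Command as a local stand-in for minikube's ssh_runner; import os/exec assumed:

    // gatherLogs runs each log-source command and returns its output keyed
    // by source name, approximating the "Gathering logs" steps above.
    func gatherLogs() map[string]string {
        sources := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "CRI-O":   "sudo journalctl -u crio -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        out := make(map[string]string)
        for name, cmdline := range sources {
            b, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            if err != nil {
                out[name] = "error: " + err.Error()
                continue
            }
            out[name] = string(b)
        }
        return out
    }
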
	I0630 15:53:52.393329 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:52.409925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:52.410010 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:52.446622 1612198 cri.go:89] found id: ""
	I0630 15:53:52.446659 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.446673 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:52.446684 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:52.446769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:52.493894 1612198 cri.go:89] found id: ""
	I0630 15:53:52.493929 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.493940 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:52.493947 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:52.494012 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:52.530891 1612198 cri.go:89] found id: ""
	I0630 15:53:52.530943 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.530956 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:52.530965 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:52.531141 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:52.569016 1612198 cri.go:89] found id: ""
	I0630 15:53:52.569046 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.569054 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:52.569068 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:52.569144 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:52.607137 1612198 cri.go:89] found id: ""
	I0630 15:53:52.607176 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.607186 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:52.607194 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:52.607264 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:52.655286 1612198 cri.go:89] found id: ""
	I0630 15:53:52.655334 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.655343 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:52.655350 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:52.655420 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:48.266876 1620744 system_pods.go:86] 8 kube-system pods found
	I0630 15:53:48.266910 1620744 system_pods.go:89] "coredns-674b8bbfcf-hggsr" [23d55357-057a-40e9-8e04-15d6969956f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:48.266917 1620744 system_pods.go:89] "coredns-674b8bbfcf-qt9bv" [e6b1fda6-656a-4b2e-83bf-7ba172a51e6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0630 15:53:48.266923 1620744 system_pods.go:89] "etcd-bridge-668101" [e1bf0d53-52f0-4220-bbfb-1eeb9c30bffd] Running
	I0630 15:53:48.266928 1620744 system_pods.go:89] "kube-apiserver-bridge-668101" [cc2997b6-5a09-46c9-b7a9-c0cc8e16c9ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0630 15:53:48.266936 1620744 system_pods.go:89] "kube-controller-manager-bridge-668101" [3195588d-e746-4e60-85f8-00616e95efac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0630 15:53:48.266940 1620744 system_pods.go:89] "kube-proxy-q2tjj" [774952ca-bf08-420f-9272-88bfb72b445a] Running
	I0630 15:53:48.266944 1620744 system_pods.go:89] "kube-scheduler-bridge-668101" [e22bffdd-088c-4e05-b030-f3922a56f418] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0630 15:53:48.266949 1620744 system_pods.go:89] "storage-provisioner" [d39eade7-d69c-4ba1-871c-9d22e90f3162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0630 15:53:48.266959 1620744 system_pods.go:126] duration metric: took 993.056385ms to wait for k8s-apps to be running ...
	I0630 15:53:48.266967 1620744 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 15:53:48.267016 1620744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:53:48.282778 1620744 system_svc.go:56] duration metric: took 15.79609ms WaitForService to wait for kubelet
	I0630 15:53:48.282832 1620744 kubeadm.go:578] duration metric: took 2.28032496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 15:53:48.282860 1620744 node_conditions.go:102] verifying NodePressure condition ...
	I0630 15:53:48.286721 1620744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 15:53:48.286750 1620744 node_conditions.go:123] node cpu capacity is 2
	I0630 15:53:48.286764 1620744 node_conditions.go:105] duration metric: took 3.897099ms to run NodePressure ...
	I0630 15:53:48.286777 1620744 start.go:241] waiting for startup goroutines ...
	I0630 15:53:48.286784 1620744 start.go:246] waiting for cluster config update ...
	I0630 15:53:48.286794 1620744 start.go:255] writing updated cluster config ...
	I0630 15:53:48.287052 1620744 ssh_runner.go:195] Run: rm -f paused
	I0630 15:53:48.292293 1620744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:48.297080 1620744 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-hggsr" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:53:50.309473 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	W0630 15:53:52.803327 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	I0630 15:53:52.693017 1612198 cri.go:89] found id: ""
	I0630 15:53:52.693053 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.693066 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:52.693093 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:52.693156 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:52.729639 1612198 cri.go:89] found id: ""
	I0630 15:53:52.729674 1612198 logs.go:282] 0 containers: []
	W0630 15:53:52.729685 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:52.729713 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:52.729731 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:52.744808 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:52.744846 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:52.818006 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:52.818076 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:52.818095 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:52.913720 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:52.913794 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:52.955851 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:52.955898 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:55.506514 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:55.523943 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:55.524024 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:55.562846 1612198 cri.go:89] found id: ""
	I0630 15:53:55.562884 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.562893 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:55.562900 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:55.562960 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:55.601862 1612198 cri.go:89] found id: ""
	I0630 15:53:55.601895 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.601907 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:55.601915 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:55.601988 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:55.650904 1612198 cri.go:89] found id: ""
	I0630 15:53:55.650946 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.650958 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:55.650968 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:55.651051 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:55.695050 1612198 cri.go:89] found id: ""
	I0630 15:53:55.695081 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.695089 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:55.695096 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:55.695167 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:55.732863 1612198 cri.go:89] found id: ""
	I0630 15:53:55.732904 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.732917 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:55.732925 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:55.732997 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:55.772221 1612198 cri.go:89] found id: ""
	I0630 15:53:55.772254 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.772265 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:55.772275 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:55.772349 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:55.811091 1612198 cri.go:89] found id: ""
	I0630 15:53:55.811134 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.811146 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:55.811154 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:55.811213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:55.846273 1612198 cri.go:89] found id: ""
	I0630 15:53:55.846313 1612198 logs.go:282] 0 containers: []
	W0630 15:53:55.846338 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:55.846352 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:55.846370 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:55.921797 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:55.921845 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:53:55.963517 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:55.963553 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:56.023942 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:56.023988 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:56.038647 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:56.038687 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:56.119572 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0630 15:53:55.303307 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-hggsr" is not "Ready", error: <nil>
	I0630 15:53:55.805200 1620744 pod_ready.go:94] pod "coredns-674b8bbfcf-hggsr" is "Ready"
	I0630 15:53:55.805235 1620744 pod_ready.go:86] duration metric: took 7.508115108s for pod "coredns-674b8bbfcf-hggsr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:55.805249 1620744 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace to be "Ready" or be gone ...
	W0630 15:53:57.811769 1620744 pod_ready.go:104] pod "coredns-674b8bbfcf-qt9bv" is not "Ready", error: <nil>
	I0630 15:53:58.309220 1620744 pod_ready.go:99] pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace is gone: getting pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace (will retry): pods "coredns-674b8bbfcf-qt9bv" not found
	I0630 15:53:58.309253 1620744 pod_ready.go:86] duration metric: took 2.5039962s for pod "coredns-674b8bbfcf-qt9bv" in "kube-system" namespace to be "Ready" or be gone ...
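
[editor's note] Note how the qt9bv wait just resolved: the pod was deleted by the rescale to 1 replica, and the 'be "Ready" or be gone' condition treats a NotFound error as success. A sketch of that condition, assuming the clientset and apierrors/corev1/metav1 imports from the earlier sketches:

    // readyOrGone returns true when the named kube-system pod is either
    // Ready or no longer exists, matching the "not found" resolution above.
    func readyOrGone(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // gone counts as done
        }
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
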
	I0630 15:53:58.311407 1620744 pod_ready.go:83] waiting for pod "etcd-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.315815 1620744 pod_ready.go:94] pod "etcd-bridge-668101" is "Ready"
	I0630 15:53:58.315845 1620744 pod_ready.go:86] duration metric: took 4.413088ms for pod "etcd-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.317890 1620744 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.321951 1620744 pod_ready.go:94] pod "kube-apiserver-bridge-668101" is "Ready"
	I0630 15:53:58.322004 1620744 pod_ready.go:86] duration metric: took 4.070763ms for pod "kube-apiserver-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.325941 1620744 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.330240 1620744 pod_ready.go:94] pod "kube-controller-manager-bridge-668101" is "Ready"
	I0630 15:53:58.330273 1620744 pod_ready.go:86] duration metric: took 4.307436ms for pod "kube-controller-manager-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.509388 1620744 pod_ready.go:83] waiting for pod "kube-proxy-q2tjj" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:58.911133 1620744 pod_ready.go:94] pod "kube-proxy-q2tjj" is "Ready"
	I0630 15:53:58.911181 1620744 pod_ready.go:86] duration metric: took 401.753348ms for pod "kube-proxy-q2tjj" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.110354 1620744 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.509728 1620744 pod_ready.go:94] pod "kube-scheduler-bridge-668101" is "Ready"
	I0630 15:53:59.509764 1620744 pod_ready.go:86] duration metric: took 399.372679ms for pod "kube-scheduler-bridge-668101" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 15:53:59.509778 1620744 pod_ready.go:40] duration metric: took 11.217429269s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 15:53:59.557513 1620744 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 15:53:59.559079 1620744 out.go:177] * Done! kubectl is now configured to use "bridge-668101" cluster and "default" namespace by default
	I0630 15:53:58.620232 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:53:58.638119 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:53:58.638194 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:53:58.674101 1612198 cri.go:89] found id: ""
	I0630 15:53:58.674160 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.674175 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:53:58.674184 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:53:58.674259 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:53:58.712115 1612198 cri.go:89] found id: ""
	I0630 15:53:58.712167 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.712179 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:53:58.712192 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:53:58.712261 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:53:58.766961 1612198 cri.go:89] found id: ""
	I0630 15:53:58.767004 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.767016 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:53:58.767025 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:53:58.767114 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:53:58.817233 1612198 cri.go:89] found id: ""
	I0630 15:53:58.817274 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.817286 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:53:58.817297 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:53:58.817379 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:53:58.858728 1612198 cri.go:89] found id: ""
	I0630 15:53:58.858757 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.858774 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:53:58.858784 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:53:58.858842 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:53:58.900041 1612198 cri.go:89] found id: ""
	I0630 15:53:58.900082 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.900094 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:53:58.900102 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:53:58.900176 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:53:58.944995 1612198 cri.go:89] found id: ""
	I0630 15:53:58.945026 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.945037 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:53:58.945046 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:53:58.945110 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:53:58.987156 1612198 cri.go:89] found id: ""
	I0630 15:53:58.987204 1612198 logs.go:282] 0 containers: []
	W0630 15:53:58.987216 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:53:58.987233 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:53:58.987252 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:53:59.054774 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:53:59.054821 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:53:59.071556 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:53:59.071601 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:53:59.144600 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:53:59.144631 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:53:59.144644 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:53:59.218471 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:53:59.218519 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:01.761632 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:01.781793 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:01.781885 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:01.834337 1612198 cri.go:89] found id: ""
	I0630 15:54:01.834370 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.834381 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:01.834390 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:01.834456 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:01.879488 1612198 cri.go:89] found id: ""
	I0630 15:54:01.879528 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.879542 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:01.879552 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:01.879629 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:01.919612 1612198 cri.go:89] found id: ""
	I0630 15:54:01.919656 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.919671 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:01.919681 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:01.919755 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:01.959025 1612198 cri.go:89] found id: ""
	I0630 15:54:01.959108 1612198 logs.go:282] 0 containers: []
	W0630 15:54:01.959118 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:01.959126 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:01.959213 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:02.004157 1612198 cri.go:89] found id: ""
	I0630 15:54:02.004193 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.004207 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:02.004216 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:02.004293 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:02.041453 1612198 cri.go:89] found id: ""
	I0630 15:54:02.041488 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.041496 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:02.041503 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:02.041573 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:02.092760 1612198 cri.go:89] found id: ""
	I0630 15:54:02.092801 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.092814 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:02.092824 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:02.092894 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:02.130937 1612198 cri.go:89] found id: ""
	I0630 15:54:02.130976 1612198 logs.go:282] 0 containers: []
	W0630 15:54:02.130985 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:02.130996 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:02.131076 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:02.186285 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:02.186333 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:02.203252 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:02.203283 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:02.274788 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:02.274820 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:02.274836 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:02.354791 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:02.354835 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:04.902714 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:04.922560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:04.922631 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:04.961257 1612198 cri.go:89] found id: ""
	I0630 15:54:04.961291 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.961302 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:04.961312 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:04.961388 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:04.997894 1612198 cri.go:89] found id: ""
	I0630 15:54:04.997927 1612198 logs.go:282] 0 containers: []
	W0630 15:54:04.997936 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:04.997942 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:04.998007 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:05.038875 1612198 cri.go:89] found id: ""
	I0630 15:54:05.038923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.038936 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:05.038945 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:05.039035 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:05.080082 1612198 cri.go:89] found id: ""
	I0630 15:54:05.080123 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.080135 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:05.080145 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:05.080205 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:05.117322 1612198 cri.go:89] found id: ""
	I0630 15:54:05.117358 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.117371 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:05.117378 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:05.117469 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:05.172542 1612198 cri.go:89] found id: ""
	I0630 15:54:05.172578 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.172589 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:05.172598 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:05.172666 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:05.220246 1612198 cri.go:89] found id: ""
	I0630 15:54:05.220280 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.220291 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:05.220299 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:05.220365 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:05.279486 1612198 cri.go:89] found id: ""
	I0630 15:54:05.279521 1612198 logs.go:282] 0 containers: []
	W0630 15:54:05.279533 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:05.279548 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:05.279564 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:05.341677 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:05.341734 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:05.359513 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:05.359566 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:05.445100 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:05.445128 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:05.445144 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:05.552812 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:05.552883 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.098433 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:08.115865 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:08.115985 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:08.155035 1612198 cri.go:89] found id: ""
	I0630 15:54:08.155077 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.155092 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:08.155103 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:08.155173 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:08.192666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.192702 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.192711 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:08.192719 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:08.192791 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:08.234681 1612198 cri.go:89] found id: ""
	I0630 15:54:08.234710 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.234718 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:08.234723 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:08.234782 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:08.271666 1612198 cri.go:89] found id: ""
	I0630 15:54:08.271699 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.271707 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:08.271714 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:08.271769 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:08.309335 1612198 cri.go:89] found id: ""
	I0630 15:54:08.309366 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.309375 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:08.309381 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:08.309471 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:08.351248 1612198 cri.go:89] found id: ""
	I0630 15:54:08.351284 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.351296 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:08.351305 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:08.351384 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:08.386803 1612198 cri.go:89] found id: ""
	I0630 15:54:08.386833 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.386843 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:08.386851 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:08.386922 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:08.434407 1612198 cri.go:89] found id: ""
	I0630 15:54:08.434442 1612198 logs.go:282] 0 containers: []
	W0630 15:54:08.434451 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:08.434461 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:08.434474 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:08.510981 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:08.511009 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:08.511028 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:08.590361 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:08.590426 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:08.634603 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:08.634636 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:08.687291 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:08.687339 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.202732 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:11.228516 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:54:11.228589 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:54:11.307836 1612198 cri.go:89] found id: ""
	I0630 15:54:11.307870 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.307882 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:54:11.307890 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:54:11.307973 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:54:11.359347 1612198 cri.go:89] found id: ""
	I0630 15:54:11.359380 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.359400 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:54:11.359408 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:54:11.359467 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:54:11.414423 1612198 cri.go:89] found id: ""
	I0630 15:54:11.414469 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.414479 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:54:11.414486 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:54:11.414549 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:54:11.457669 1612198 cri.go:89] found id: ""
	I0630 15:54:11.457704 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.457722 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:54:11.457735 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:54:11.457804 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:54:11.511061 1612198 cri.go:89] found id: ""
	I0630 15:54:11.511131 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.511147 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:54:11.511159 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:54:11.511345 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:54:11.557886 1612198 cri.go:89] found id: ""
	I0630 15:54:11.557923 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.557936 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:54:11.557946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:54:11.558014 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:54:11.603894 1612198 cri.go:89] found id: ""
	I0630 15:54:11.603926 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.603938 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:54:11.603946 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:54:11.604016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:54:11.652115 1612198 cri.go:89] found id: ""
	I0630 15:54:11.652147 1612198 logs.go:282] 0 containers: []
	W0630 15:54:11.652156 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:54:11.652165 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:54:11.652177 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0630 15:54:11.700550 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:54:11.700588 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:54:11.761044 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:54:11.761088 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:54:11.779581 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:54:11.779669 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:54:11.872983 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:54:11.873013 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:54:11.873040 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:54:14.469180 1612198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:54:14.488438 1612198 kubeadm.go:593] duration metric: took 4m4.858627578s to restartPrimaryControlPlane
	W0630 15:54:14.488521 1612198 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0630 15:54:14.488557 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:54:16.362367 1612198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.873774715s)
	I0630 15:54:16.362472 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:54:16.381754 1612198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 15:54:16.394832 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:54:16.407997 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:54:16.408022 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:54:16.408088 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:54:16.420299 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:54:16.420374 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:54:16.432689 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:54:16.450141 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:54:16.450232 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:54:16.466230 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.478725 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:54:16.478810 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:54:16.491926 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:54:16.503661 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:54:16.503754 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:54:16.516000 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:54:16.604779 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:54:16.604866 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:54:16.771725 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:54:16.771885 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:54:16.772009 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:54:17.000568 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:54:17.002768 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:54:17.007633 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:54:17.007744 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:54:17.007835 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:54:17.007906 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:54:17.007987 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:54:17.008050 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:54:17.008130 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:54:17.008216 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:54:17.008304 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:54:17.008429 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:54:17.008479 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:54:17.008545 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:54:17.091062 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:54:17.216540 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:54:17.314609 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:54:17.399588 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:54:17.417749 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:54:17.418852 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:54:17.418923 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:54:17.631341 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:54:17.633197 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:54:17.633340 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:54:17.639557 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:54:17.642269 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:54:17.646155 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:54:17.647610 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:54:57.647972 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:54:57.648456 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:54:57.648704 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:02.649537 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:02.649775 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:12.650265 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:12.650526 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:55:32.650986 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:55:32.651250 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652241 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:12.652569 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:12.652621 1612198 kubeadm.go:310] 
	I0630 15:56:12.652681 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:56:12.652741 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:56:12.652751 1612198 kubeadm.go:310] 
	I0630 15:56:12.652778 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:56:12.652814 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:56:12.652960 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:56:12.652983 1612198 kubeadm.go:310] 
	I0630 15:56:12.653129 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:56:12.653192 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:56:12.653257 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:56:12.653270 1612198 kubeadm.go:310] 
	I0630 15:56:12.653457 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:56:12.653585 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:56:12.653603 1612198 kubeadm.go:310] 
	I0630 15:56:12.653767 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:56:12.653893 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:56:12.654008 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:56:12.654137 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:56:12.654157 1612198 kubeadm.go:310] 
	I0630 15:56:12.655912 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:56:12.655994 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:56:12.656047 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0630 15:56:12.656312 1612198 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0630 15:56:12.656390 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0630 15:56:13.118145 1612198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:56:13.137252 1612198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 15:56:13.148791 1612198 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 15:56:13.148814 1612198 kubeadm.go:157] found existing configuration files:
	
	I0630 15:56:13.148866 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 15:56:13.159734 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 15:56:13.159815 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 15:56:13.170810 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 15:56:13.181716 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 15:56:13.181794 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 15:56:13.193772 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.204825 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 15:56:13.204895 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 15:56:13.216418 1612198 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 15:56:13.227545 1612198 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 15:56:13.227620 1612198 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 15:56:13.239663 1612198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 15:56:13.314550 1612198 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0630 15:56:13.314640 1612198 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 15:56:13.462367 1612198 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 15:56:13.462550 1612198 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 15:56:13.462695 1612198 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0630 15:56:13.649387 1612198 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 15:56:13.651840 1612198 out.go:235]   - Generating certificates and keys ...
	I0630 15:56:13.651943 1612198 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 15:56:13.652047 1612198 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 15:56:13.652179 1612198 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0630 15:56:13.652262 1612198 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0630 15:56:13.652381 1612198 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0630 15:56:13.652486 1612198 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0630 15:56:13.652658 1612198 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0630 15:56:13.652726 1612198 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0630 15:56:13.652788 1612198 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0630 15:56:13.652876 1612198 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0630 15:56:13.652930 1612198 kubeadm.go:310] [certs] Using the existing "sa" key
	I0630 15:56:13.653009 1612198 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 15:56:13.920791 1612198 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 15:56:14.049695 1612198 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 15:56:14.213882 1612198 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 15:56:14.469969 1612198 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 15:56:14.493927 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 15:56:14.496121 1612198 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 15:56:14.496179 1612198 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 15:56:14.667471 1612198 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 15:56:14.669824 1612198 out.go:235]   - Booting up control plane ...
	I0630 15:56:14.670005 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 15:56:14.673040 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 15:56:14.674211 1612198 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 15:56:14.675608 1612198 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 15:56:14.680984 1612198 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0630 15:56:54.682952 1612198 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0630 15:56:54.683551 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:54.683769 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:56:59.684143 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:56:59.684406 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:09.685091 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:09.685374 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:57:29.686408 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:57:29.686681 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688249 1612198 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0630 15:58:09.688537 1612198 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0630 15:58:09.688564 1612198 kubeadm.go:310] 
	I0630 15:58:09.688620 1612198 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0630 15:58:09.688672 1612198 kubeadm.go:310] 		timed out waiting for the condition
	I0630 15:58:09.688681 1612198 kubeadm.go:310] 
	I0630 15:58:09.688721 1612198 kubeadm.go:310] 	This error is likely caused by:
	I0630 15:58:09.688774 1612198 kubeadm.go:310] 		- The kubelet is not running
	I0630 15:58:09.688912 1612198 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0630 15:58:09.688921 1612198 kubeadm.go:310] 
	I0630 15:58:09.689114 1612198 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0630 15:58:09.689178 1612198 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0630 15:58:09.689250 1612198 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0630 15:58:09.689265 1612198 kubeadm.go:310] 
	I0630 15:58:09.689442 1612198 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0630 15:58:09.689568 1612198 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0630 15:58:09.689580 1612198 kubeadm.go:310] 
	I0630 15:58:09.689730 1612198 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0630 15:58:09.689812 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0630 15:58:09.689888 1612198 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0630 15:58:09.689950 1612198 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0630 15:58:09.689957 1612198 kubeadm.go:310] 
	I0630 15:58:09.692282 1612198 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 15:58:09.692363 1612198 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0630 15:58:09.692431 1612198 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0630 15:58:09.692497 1612198 kubeadm.go:394] duration metric: took 8m0.118278148s to StartCluster
	I0630 15:58:09.692554 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0630 15:58:09.692626 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0630 15:58:09.732128 1612198 cri.go:89] found id: ""
	I0630 15:58:09.732169 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.732178 1612198 logs.go:284] No container was found matching "kube-apiserver"
	I0630 15:58:09.732185 1612198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0630 15:58:09.732247 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0630 15:58:09.764993 1612198 cri.go:89] found id: ""
	I0630 15:58:09.765024 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.765034 1612198 logs.go:284] No container was found matching "etcd"
	I0630 15:58:09.765042 1612198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0630 15:58:09.765112 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0630 15:58:09.800767 1612198 cri.go:89] found id: ""
	I0630 15:58:09.800809 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.800820 1612198 logs.go:284] No container was found matching "coredns"
	I0630 15:58:09.800828 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0630 15:58:09.800888 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0630 15:58:09.834514 1612198 cri.go:89] found id: ""
	I0630 15:58:09.834544 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.834553 1612198 logs.go:284] No container was found matching "kube-scheduler"
	I0630 15:58:09.834560 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0630 15:58:09.834636 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0630 15:58:09.867918 1612198 cri.go:89] found id: ""
	I0630 15:58:09.867946 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.867955 1612198 logs.go:284] No container was found matching "kube-proxy"
	I0630 15:58:09.867962 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0630 15:58:09.868016 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0630 15:58:09.908166 1612198 cri.go:89] found id: ""
	I0630 15:58:09.908199 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.908208 1612198 logs.go:284] No container was found matching "kube-controller-manager"
	I0630 15:58:09.908215 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0630 15:58:09.908275 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0630 15:58:09.941613 1612198 cri.go:89] found id: ""
	I0630 15:58:09.941649 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.941658 1612198 logs.go:284] No container was found matching "kindnet"
	I0630 15:58:09.941665 1612198 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0630 15:58:09.941721 1612198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0630 15:58:09.983579 1612198 cri.go:89] found id: ""
	I0630 15:58:09.983617 1612198 logs.go:282] 0 containers: []
	W0630 15:58:09.983626 1612198 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0630 15:58:09.983637 1612198 logs.go:123] Gathering logs for kubelet ...
	I0630 15:58:09.983652 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0630 15:58:10.041447 1612198 logs.go:123] Gathering logs for dmesg ...
	I0630 15:58:10.041506 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0630 15:58:10.055597 1612198 logs.go:123] Gathering logs for describe nodes ...
	I0630 15:58:10.055633 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0630 15:58:10.125308 1612198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0630 15:58:10.125345 1612198 logs.go:123] Gathering logs for CRI-O ...
	I0630 15:58:10.125363 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0630 15:58:10.231871 1612198 logs.go:123] Gathering logs for container status ...
	I0630 15:58:10.231919 1612198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0630 15:58:10.270513 1612198 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0630 15:58:10.270594 1612198 out.go:270] * 
	W0630 15:58:10.270682 1612198 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.270703 1612198 out.go:270] * 
	W0630 15:58:10.272423 1612198 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0630 15:58:10.276013 1612198 out.go:201] 
	W0630 15:58:10.277283 1612198 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0630 15:58:10.277328 1612198 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0630 15:58:10.277358 1612198 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0630 15:58:10.279010 1612198 out.go:201] 
	
	
	==> CRI-O <==
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.905264977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299986905245182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5525ffe8-f98e-45d0-b9f1-f0d3794e46ac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.906054923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e1cfc9a-34a9-40a1-8115-07905b7fcd89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.906101873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e1cfc9a-34a9-40a1-8115-07905b7fcd89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.906131562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8e1cfc9a-34a9-40a1-8115-07905b7fcd89 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.948409855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a27664d4-56de-4cc4-a8be-5ea685826bb9 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.948479801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a27664d4-56de-4cc4-a8be-5ea685826bb9 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.954230864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=935294d6-3b62-4c11-89fa-f31cb600e597 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.954600834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299986954572071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=935294d6-3b62-4c11-89fa-f31cb600e597 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.955216188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dc4db5a-ad2b-4603-bb58-db526d5fdbff name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.955265496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dc4db5a-ad2b-4603-bb58-db526d5fdbff name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.955294841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9dc4db5a-ad2b-4603-bb58-db526d5fdbff name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.995882269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc5f56b0-81f9-4b6d-bed8-ebc8dd2d444d name=/runtime.v1.RuntimeService/Version
	Jun 30 16:13:06 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:06.995948910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc5f56b0-81f9-4b6d-bed8-ebc8dd2d444d name=/runtime.v1.RuntimeService/Version
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.000101604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c556212a-519c-4366-9310-dc10e3b81315 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.000502497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299987000480083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c556212a-519c-4366-9310-dc10e3b81315 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.001226810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4ad67d9-c4bc-4f9e-8a5f-e974f90ddaf6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.001273784Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4ad67d9-c4bc-4f9e-8a5f-e974f90ddaf6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.001302773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a4ad67d9-c4bc-4f9e-8a5f-e974f90ddaf6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.037880511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84de2754-aa5c-47ce-9b51-34e3b6803a12 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.037948376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84de2754-aa5c-47ce-9b51-34e3b6803a12 name=/runtime.v1.RuntimeService/Version
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.039313932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f96f471d-f00d-4f8f-b145-3697f39ef578 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.039686258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1751299987039662978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f96f471d-f00d-4f8f-b145-3697f39ef578 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.040338440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0199da4-3c6a-47c0-9b29-e086bc87f9ae name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.040381561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0199da4-3c6a-47c0-9b29-e086bc87f9ae name=/runtime.v1.RuntimeService/ListContainers
	Jun 30 16:13:07 old-k8s-version-836310 crio[829]: time="2025-06-30 16:13:07.040418060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c0199da4-3c6a-47c0-9b29-e086bc87f9ae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun30 15:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.000003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.001300] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004063] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.051769] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun30 15:50] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108667] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.116066] kauditd_printk_skb: 46 callbacks suppressed
	[Jun30 15:56] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:13:07 up 23 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux old-k8s-version-836310 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000aa1270, 0xc000ab5b80)
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: goroutine 122 [runnable]:
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000bd4370, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000e04780, 0x0, 0x0)
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0006d8540)
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jun 30 16:13:06 old-k8s-version-836310 kubelet[8619]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jun 30 16:13:06 old-k8s-version-836310 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 30 16:13:06 old-k8s-version-836310 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 30 16:13:06 old-k8s-version-836310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 176.
	Jun 30 16:13:06 old-k8s-version-836310 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 30 16:13:07 old-k8s-version-836310 kubelet[8685]: I0630 16:13:07.006594    8685 server.go:416] Version: v1.20.0
	Jun 30 16:13:07 old-k8s-version-836310 kubelet[8685]: I0630 16:13:07.007387    8685 server.go:837] Client rotation is on, will bootstrap in background
	Jun 30 16:13:07 old-k8s-version-836310 kubelet[8685]: I0630 16:13:07.009749    8685 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 30 16:13:07 old-k8s-version-836310 kubelet[8685]: W0630 16:13:07.010732    8685 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 30 16:13:07 old-k8s-version-836310 kubelet[8685]: I0630 16:13:07.010794    8685 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 2 (251.160874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-836310" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.63s)
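
For anyone replaying the kubelet-check loop from the kubeadm output above by hand, the sketch below polls the kubelet's healthz endpoint the same way: 10248 is the kubelet's default healthz port, while the 5s timeout and five attempts are illustrative assumptions, not kubeadm's exact values.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Poll the kubelet healthz endpoint, mirroring kubeadm's [kubelet-check].
// Port 10248 is the kubelet default; the retry cadence here is illustrative.
func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// This is the state seen above: connection refused while the
			// kubelet is down or crash-looping.
			fmt.Printf("attempt %d: kubelet not healthy: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: healthz returned HTTP %d\n", attempt, resp.StatusCode)
		return
	}
}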

                                                
                                    

Test pass (263/322)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.75
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.16
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.33.2/json-events 14.35
13 TestDownloadOnly/v1.33.2/preload-exists 0
17 TestDownloadOnly/v1.33.2/LogsDuration 0.07
18 TestDownloadOnly/v1.33.2/DeleteAll 0.16
19 TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.69
22 TestOffline 89.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 421.37
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.54
36 TestAddons/parallel/RegistryCreds 0.76
38 TestAddons/parallel/InspektorGadget 11.82
39 TestAddons/parallel/MetricsServer 6.8
42 TestAddons/parallel/Headlamp 87.98
43 TestAddons/parallel/CloudSpanner 5.64
45 TestAddons/parallel/NvidiaDevicePlugin 6.53
48 TestAddons/StoppedEnableDisable 91.2
49 TestCertOptions 59.34
50 TestCertExpiration 311.29
52 TestForceSystemdFlag 80.89
53 TestForceSystemdEnv 65.78
55 TestKVMDriverInstallOrUpdate 4.61
59 TestErrorSpam/setup 48.23
60 TestErrorSpam/start 0.4
61 TestErrorSpam/status 0.81
62 TestErrorSpam/pause 1.87
63 TestErrorSpam/unpause 1.91
64 TestErrorSpam/stop 5.68
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 53.59
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 32.55
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.83
76 TestFunctional/serial/CacheCmd/cache/add_local 2.19
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 33.48
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.49
88 TestFunctional/serial/InvalidService 4.78
90 TestFunctional/parallel/ConfigCmd 0.39
92 TestFunctional/parallel/DryRun 0.31
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.83
98 TestFunctional/parallel/ServiceCmdConnect 41.5
99 TestFunctional/parallel/AddonsCmd 0.19
102 TestFunctional/parallel/SSHCmd 0.47
103 TestFunctional/parallel/CpCmd 1.4
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.42
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
114 TestFunctional/parallel/License 0.59
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.48
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
124 TestFunctional/parallel/ImageCommands/ImageBuild 4.01
125 TestFunctional/parallel/ImageCommands/Setup 1.76
135 TestFunctional/parallel/MountCmd/any-port 41.58
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.69
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
143 TestFunctional/parallel/ServiceCmd/DeployApp 37.19
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
146 TestFunctional/parallel/ProfileCmd/profile_list 0.36
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
148 TestFunctional/parallel/ServiceCmd/List 0.45
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
151 TestFunctional/parallel/ServiceCmd/Format 0.3
152 TestFunctional/parallel/ServiceCmd/URL 0.31
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.29
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 222.4
162 TestMultiControlPlane/serial/DeployApp 7.11
163 TestMultiControlPlane/serial/PingHostFromPods 1.26
164 TestMultiControlPlane/serial/AddWorkerNode 53.93
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.98
167 TestMultiControlPlane/serial/CopyFile 14.43
168 TestMultiControlPlane/serial/StopSecondaryNode 91.79
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.62
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.04
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 414.57
173 TestMultiControlPlane/serial/DeleteSecondaryNode 19.18
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
175 TestMultiControlPlane/serial/StopCluster 272.91
176 TestMultiControlPlane/serial/RestartCluster 137.26
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
178 TestMultiControlPlane/serial/AddSecondaryNode 79.98
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
183 TestJSONOutput/start/Command 59.87
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.84
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.74
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.38
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 94.88
215 TestMountStart/serial/StartWithMountFirst 33.78
216 TestMountStart/serial/VerifyMountFirst 0.4
217 TestMountStart/serial/StartWithMountSecond 27.45
218 TestMountStart/serial/VerifyMountSecond 0.41
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.41
221 TestMountStart/serial/Stop 1.4
222 TestMountStart/serial/RestartStopped 23.56
223 TestMountStart/serial/VerifyMountPostStop 0.42
226 TestMultiNode/serial/FreshStart2Nodes 114.53
227 TestMultiNode/serial/DeployApp2Nodes 6.39
228 TestMultiNode/serial/PingHostFrom2Pods 0.89
229 TestMultiNode/serial/AddNode 51.1
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.63
232 TestMultiNode/serial/CopyFile 8
233 TestMultiNode/serial/StopNode 3.25
234 TestMultiNode/serial/StartAfterStop 39.62
235 TestMultiNode/serial/RestartKeepsNodes 322.27
236 TestMultiNode/serial/DeleteNode 2.84
237 TestMultiNode/serial/StopMultiNode 182.22
238 TestMultiNode/serial/RestartMultiNode 96.42
239 TestMultiNode/serial/ValidateNameConflict 47.13
250 TestRunningBinaryUpgrade 199.56
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 96.64
257 TestStoppedBinaryUpgrade/Setup 2.25
258 TestStoppedBinaryUpgrade/Upgrade 109.89
259 TestNoKubernetes/serial/StartWithStopK8s 38.22
260 TestNoKubernetes/serial/Start 27.7
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
262 TestNoKubernetes/serial/ProfileList 30.3
263 TestNoKubernetes/serial/Stop 1.47
264 TestNoKubernetes/serial/StartNoArgs 23.82
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
274 TestPause/serial/Start 100.99
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
283 TestNetworkPlugins/group/false 3.25
291 TestStartStop/group/no-preload/serial/FirstStart 93.71
293 TestStartStop/group/embed-certs/serial/FirstStart 84.88
294 TestStartStop/group/no-preload/serial/DeployApp 12.31
295 TestStartStop/group/embed-certs/serial/DeployApp 10.32
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
297 TestStartStop/group/no-preload/serial/Stop 90.86
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
299 TestStartStop/group/embed-certs/serial/Stop 91.05
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.46
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
303 TestStartStop/group/no-preload/serial/SecondStart 64.4
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/embed-certs/serial/SecondStart 67.13
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.45
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.42
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
315 TestStartStop/group/no-preload/serial/Pause 2.88
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
318 TestStartStop/group/newest-cni/serial/FirstStart 51.13
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
320 TestStartStop/group/embed-certs/serial/Pause 3.06
321 TestNetworkPlugins/group/auto/Start 76.77
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 81.84
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
326 TestStartStop/group/newest-cni/serial/Stop 11.38
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
328 TestStartStop/group/newest-cni/serial/SecondStart 54.74
329 TestStartStop/group/old-k8s-version/serial/Stop 2.34
330 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
332 TestNetworkPlugins/group/auto/KubeletFlags 0.28
333 TestNetworkPlugins/group/auto/NetCatPod 11.39
334 TestNetworkPlugins/group/auto/DNS 0.17
335 TestNetworkPlugins/group/auto/Localhost 0.14
336 TestNetworkPlugins/group/auto/HairPin 0.14
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
338 TestNetworkPlugins/group/kindnet/Start 72.7
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
343 TestStartStop/group/newest-cni/serial/Pause 3
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.71
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.24
346 TestNetworkPlugins/group/calico/Start 108.11
347 TestNetworkPlugins/group/custom-flannel/Start 127.18
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
350 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
351 TestNetworkPlugins/group/kindnet/DNS 0.18
352 TestNetworkPlugins/group/kindnet/Localhost 0.14
353 TestNetworkPlugins/group/kindnet/HairPin 0.15
354 TestNetworkPlugins/group/enable-default-cni/Start 64.34
355 TestNetworkPlugins/group/calico/ControllerPod 6.01
356 TestNetworkPlugins/group/calico/KubeletFlags 0.3
357 TestNetworkPlugins/group/calico/NetCatPod 11.33
358 TestNetworkPlugins/group/calico/DNS 0.16
359 TestNetworkPlugins/group/calico/Localhost 0.17
360 TestNetworkPlugins/group/calico/HairPin 0.15
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.33
363 TestNetworkPlugins/group/custom-flannel/DNS 0.15
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
366 TestNetworkPlugins/group/flannel/Start 83.46
367 TestNetworkPlugins/group/bridge/Start 76.67
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.24
370 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
371 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.41
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
375 TestNetworkPlugins/group/flannel/NetCatPod 10.24
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
377 TestNetworkPlugins/group/bridge/NetCatPod 11.24
378 TestNetworkPlugins/group/flannel/DNS 0.18
379 TestNetworkPlugins/group/flannel/Localhost 0.13
380 TestNetworkPlugins/group/flannel/HairPin 0.14
381 TestNetworkPlugins/group/bridge/DNS 0.15
382 TestNetworkPlugins/group/bridge/Localhost 0.23
383 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (25.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-777401 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-777401 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.746656619s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.75s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0630 14:18:02.703015 1557732 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0630 14:18:02.703147 1557732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-777401
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-777401: exit status 85 (68.563128ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:17 UTC |          |
	|         | -p download-only-777401        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:17:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:17:37.003075 1557744 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:17:37.003322 1557744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:17:37.003342 1557744 out.go:358] Setting ErrFile to fd 2...
	I0630 14:17:37.003350 1557744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:17:37.003553 1557744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	W0630 14:17:37.003693 1557744 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20991-1550299/.minikube/config/config.json: open /home/jenkins/minikube-integration/20991-1550299/.minikube/config/config.json: no such file or directory
	I0630 14:17:37.004337 1557744 out.go:352] Setting JSON to true
	I0630 14:17:37.005458 1557744 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":28749,"bootTime":1751264308,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:17:37.005590 1557744 start.go:140] virtualization: kvm guest
	I0630 14:17:37.008204 1557744 out.go:97] [download-only-777401] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0630 14:17:37.008401 1557744 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball: no such file or directory
	I0630 14:17:37.008464 1557744 notify.go:220] Checking for updates...
	I0630 14:17:37.009742 1557744 out.go:169] MINIKUBE_LOCATION=20991
	I0630 14:17:37.011099 1557744 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:17:37.012526 1557744 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:17:37.014007 1557744 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:17:37.015738 1557744 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0630 14:17:37.018412 1557744 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0630 14:17:37.018812 1557744 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:17:37.055817 1557744 out.go:97] Using the kvm2 driver based on user configuration
	I0630 14:17:37.055866 1557744 start.go:304] selected driver: kvm2
	I0630 14:17:37.055873 1557744 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:17:37.056263 1557744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:17:37.056371 1557744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0630 14:17:37.061479 1557744 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0630 14:17:37.063327 1557744 out.go:97] Downloading driver docker-machine-driver-kvm2:
	I0630 14:17:37.063474 1557744 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:17:39.131567 1557744 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:17:39.132217 1557744 start_flags.go:408] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0630 14:17:39.132428 1557744 start_flags.go:972] Wait components to verify : map[apiserver:true system_pods:true]
	I0630 14:17:39.132482 1557744 cni.go:84] Creating CNI manager for ""
	I0630 14:17:39.132541 1557744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:17:39.132561 1557744 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:17:39.132643 1557744 start.go:347] cluster config:
	{Name:download-only-777401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-777401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:17:39.132872 1557744 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:17:39.135130 1557744 out.go:97] Downloading VM boot image ...
	I0630 14:17:39.135248 1557744 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:17:49.955673 1557744 out.go:97] Starting "download-only-777401" primary control-plane node in "download-only-777401" cluster
	I0630 14:17:49.955720 1557744 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 14:17:50.049494 1557744 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0630 14:17:50.049544 1557744 cache.go:56] Caching tarball of preloaded images
	I0630 14:17:50.049752 1557744 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 14:17:50.051934 1557744 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0630 14:17:50.051965 1557744 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0630 14:17:50.149672 1557744 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0630 14:18:00.950044 1557744 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0630 14:18:00.950150 1557744 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0630 14:18:01.893057 1557744 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0630 14:18:01.893474 1557744 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/download-only-777401/config.json ...
	I0630 14:18:01.893512 1557744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/download-only-777401/config.json: {Name:mke38e0c78be926209965b23f8c24804bcf87e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:18:01.893708 1557744 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0630 14:18:01.893937 1557744 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-777401 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777401"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
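
The download.go entries in the log above append a go-getter-style checksum query to each URL (a sha256 companion file for the ISO and kubectl, an md5 literal for the preload tarball) and verify the artifact after download. As a rough sketch of that verify-after-download pattern in plain Go; the function name and the SHA-256-only choice are assumptions for illustration, not minikube's actual download code:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and compares the SHA-256 digest
// of what was written against wantHex. Illustrative sketch only.
func downloadAndVerify(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	// Stream to the file and the hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", dest, got, wantHex)
	}
	return nil
}

func main() {
	if len(os.Args) != 4 {
		fmt.Fprintln(os.Stderr, "usage: verify <url> <dest> <sha256-hex>")
		os.Exit(1)
	}
	if err := downloadAndVerify(os.Args[1], os.Args[2], os.Args[3]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}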

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-777401
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.33.2/json-events (14.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-781147 --force --alsologtostderr --kubernetes-version=v1.33.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-781147 --force --alsologtostderr --kubernetes-version=v1.33.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.349626671s)
--- PASS: TestDownloadOnly/v1.33.2/json-events (14.35s)

                                                
                                    
TestDownloadOnly/v1.33.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.2/preload-exists
I0630 14:18:17.435788 1557732 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
I0630 14:18:17.435839 1557732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.33.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.33.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-781147
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-781147: exit status 85 (70.946012ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:17 UTC |                     |
	|         | -p download-only-777401        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| delete  | -p download-only-777401        | download-only-777401 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC | 30 Jun 25 14:18 UTC |
	| start   | -o=json --download-only        | download-only-781147 | jenkins | v1.36.0 | 30 Jun 25 14:18 UTC |                     |
	|         | -p download-only-781147        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:18:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:18:03.132921 1558002 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:18:03.133180 1558002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:03.133190 1558002 out.go:358] Setting ErrFile to fd 2...
	I0630 14:18:03.133194 1558002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:18:03.133386 1558002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:18:03.134041 1558002 out.go:352] Setting JSON to true
	I0630 14:18:03.135023 1558002 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":28775,"bootTime":1751264308,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:18:03.135153 1558002 start.go:140] virtualization: kvm guest
	I0630 14:18:03.137228 1558002 out.go:97] [download-only-781147] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:18:03.137467 1558002 notify.go:220] Checking for updates...
	I0630 14:18:03.138846 1558002 out.go:169] MINIKUBE_LOCATION=20991
	I0630 14:18:03.140543 1558002 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:18:03.141920 1558002 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:18:03.143603 1558002 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:18:03.145264 1558002 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0630 14:18:03.148028 1558002 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0630 14:18:03.148399 1558002 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:18:03.183409 1558002 out.go:97] Using the kvm2 driver based on user configuration
	I0630 14:18:03.183447 1558002 start.go:304] selected driver: kvm2
	I0630 14:18:03.183455 1558002 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:18:03.183782 1558002 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:03.183864 1558002 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1550299/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:18:03.200998 1558002 install.go:137] /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:18:03.201070 1558002 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:18:03.202023 1558002 start_flags.go:408] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0630 14:18:03.202249 1558002 start_flags.go:972] Wait components to verify : map[apiserver:true system_pods:true]
	I0630 14:18:03.202289 1558002 cni.go:84] Creating CNI manager for ""
	I0630 14:18:03.202352 1558002 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0630 14:18:03.202366 1558002 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:18:03.202465 1558002 start.go:347] cluster config:
	{Name:download-only-781147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:download-only-781147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:18:03.202604 1558002 iso.go:125] acquiring lock: {Name:mkca1f6a064e2b51449a4c79998fea909ce647ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:18:03.204381 1558002 out.go:97] Starting "download-only-781147" primary control-plane node in "download-only-781147" cluster
	I0630 14:18:03.204408 1558002 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:03.710640 1558002 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.33.2/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	I0630 14:18:03.710679 1558002 cache.go:56] Caching tarball of preloaded images
	I0630 14:18:03.710889 1558002 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime crio
	I0630 14:18:03.712928 1558002 out.go:97] Downloading Kubernetes v1.33.2 preload ...
	I0630 14:18:03.712956 1558002 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4 ...
	I0630 14:18:03.816412 1558002 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.33.2/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:f4ddeb425578526c9f9d6e5915b23713 -> /home/jenkins/minikube-integration/20991-1550299/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-781147 host does not exist
	  To start a cluster, run: "minikube start -p download-only-781147"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.33.2/LogsDuration (0.07s)
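
Note: the log above shows minikube appending an md5 value to the preload URL (?checksum=md5:f4ddeb425578526c9f9d6e5915b23713) so the downloader can validate the tarball. A minimal Go sketch of that verification step, assuming the tarball has already been saved to the current directory; the file name and hash are taken from the log, but this is not minikube's actual code:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Expected hash comes from the ?checksum=md5:... query string in the log above.
		const expected = "f4ddeb425578526c9f9d6e5915b23713"
		f, err := os.Open("preloaded-images-k8s-v18-v1.33.2-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != expected {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, expected)
			return
		}
		fmt.Println("preload tarball checksum OK")
	}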

TestDownloadOnly/v1.33.2/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.33.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.33.2/DeleteAll (0.16s)

TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-781147
--- PASS: TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.69s)

=== RUN   TestBinaryMirror
I0630 14:18:18.113723 1557732 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.33.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-095233 --alsologtostderr --binary-mirror http://127.0.0.1:44619 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-095233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-095233
--- PASS: TestBinaryMirror (0.69s)
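
Note: this test starts a local HTTP server (here it happened to bind 127.0.0.1:44619) and passes its address via --binary-mirror, so kubectl and friends are fetched from the mirror instead of dl.k8s.io. A minimal sketch of such a mirror, assuming a dl.k8s.io-style directory layout under a hypothetical ./mirror directory (the exact paths minikube requests may differ):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror on a fixed local port; minikube is then started with
		// --binary-mirror http://127.0.0.1:44619 as in the test invocation above.
		// Expected layout mirrors dl.k8s.io, e.g.
		// ./mirror/release/v1.33.2/bin/linux/amd64/kubectl
		log.Fatal(http.ListenAndServe("127.0.0.1:44619", http.FileServer(http.Dir("./mirror"))))
	}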

TestOffline (89.31s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-206078 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-206078 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.423487325s)
helpers_test.go:175: Cleaning up "offline-crio-206078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-206078
--- PASS: TestOffline (89.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-301682
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-301682: exit status 85 (60.484135ms)
-- stdout --
	* Profile "addons-301682" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-301682"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-301682
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-301682: exit status 85 (60.52847ms)
-- stdout --
	* Profile "addons-301682" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-301682"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (421.37s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-301682 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-301682 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m1.366563648s)
--- PASS: TestAddons/Setup (421.37s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-301682 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-301682 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-301682 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-301682 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a7b88ec8-b589-45fc-8044-8377751c36ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a7b88ec8-b589-45fc-8044-8377751c36ab] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004694085s
addons_test.go:694: (dbg) Run:  kubectl --context addons-301682 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-301682 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-301682 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

TestAddons/parallel/RegistryCreds (0.76s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.824428ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-301682
addons_test.go:332: (dbg) Run:  kubectl --context addons-301682 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

TestAddons/parallel/InspektorGadget (11.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mrnh4" [f033c8a2-1ce7-4009-8b24-756b9f31550e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003749477s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 addons disable inspektor-gadget --alsologtostderr -v=1: (5.817636562s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)

TestAddons/parallel/MetricsServer (6.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.598462ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-lfbsg" [901d4541-370e-458b-a93d-8538af790281] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003511053s
addons_test.go:463: (dbg) Run:  kubectl --context addons-301682 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)

TestAddons/parallel/Headlamp (87.98s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-301682 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-301682 --alsologtostderr -v=1: (1.169803381s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-gzrzp" [53132679-b9e3-4953-af14-f7c2ef7cdb66] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-gzrzp" [53132679-b9e3-4953-af14-f7c2ef7cdb66] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-gzrzp" [53132679-b9e3-4953-af14-f7c2ef7cdb66] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m21.003586991s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301682 addons disable headlamp --alsologtostderr -v=1: (5.80394132s)
--- PASS: TestAddons/parallel/Headlamp (87.98s)

TestAddons/parallel/CloudSpanner (5.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6d967984f9-l9lpc" [bcd520ac-b89d-4aa8-80a3-08fcea21e742] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003863346s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f5f9z" [c0d62a93-b221-4cba-bb90-5d326d5d6375] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00502465s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/StoppedEnableDisable (91.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-301682
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-301682: (1m30.878717283s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-301682
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-301682
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-301682
--- PASS: TestAddons/StoppedEnableDisable (91.20s)

TestCertOptions (59.34s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-329017 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-329017 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (57.729161557s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-329017 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-329017 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-329017 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-329017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-329017
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-329017: (1.090862542s)
--- PASS: TestCertOptions (59.34s)

TestCertExpiration (311.29s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-775975 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-775975 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m23.332858686s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-775975 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-775975 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (46.857984186s)
helpers_test.go:175: Cleaning up "cert-expiration-775975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-775975
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-775975: (1.096757383s)
--- PASS: TestCertExpiration (311.29s)
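
Note: the test first provisions with --cert-expiration=3m, then re-runs start with --cert-expiration=8760h, which forces the cluster certificates to be regenerated. Concretely, the flag controls the NotAfter field of the generated certs; a small sketch that reads it back, using the apiserver cert path the CertOptions test inspects above (run inside the minikube guest; this is not minikube's own code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver cert expires:", cert.NotAfter)
	}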

TestForceSystemdFlag (80.89s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-632862 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-632862 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.563100323s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-632862 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-632862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-632862
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-632862: (1.085013382s)
--- PASS: TestForceSystemdFlag (80.89s)

TestForceSystemdEnv (65.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-185417 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-185417 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.921266436s)
helpers_test.go:175: Cleaning up "force-systemd-env-185417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-185417
--- PASS: TestForceSystemdEnv (65.78s)

TestKVMDriverInstallOrUpdate (4.61s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0630 15:40:19.139413 1557732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0630 15:40:19.139691 1557732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0630 15:40:19.172105 1557732 install.go:62] docker-machine-driver-kvm2: exit status 1
W0630 15:40:19.172291 1557732 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0630 15:40:19.172489 1557732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2413071494/001/docker-machine-driver-kvm2
I0630 15:40:19.399957 1557732 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2413071494/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720] Decompressors:map[bz2:0xc00047f610 gz:0xc00047f618 tar:0xc00047f5c0 tar.bz2:0xc00047f5d0 tar.gz:0xc00047f5e0 tar.xz:0xc00047f5f0 tar.zst:0xc00047f600 tbz2:0xc00047f5d0 tgz:0xc00047f5e0 txz:0xc00047f5f0 tzst:0xc00047f600 xz:0xc00047f620 zip:0xc00047f630 zst:0xc00047f628] Getters:map[file:0xc001f61540 http:0xc000d8afa0 https:0xc000d8aff0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0630 15:40:19.400022 1557732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2413071494/001/docker-machine-driver-kvm2
E0630 15:40:20.917537 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0630 15:40:21.978912 1557732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0630 15:40:21.979065 1557732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0630 15:40:22.018216 1557732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0630 15:40:22.018259 1557732 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0630 15:40:22.018352 1557732 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0630 15:40:22.018391 1557732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2413071494/002/docker-machine-driver-kvm2
I0630 15:40:22.046889 1557732 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2413071494/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720] Decompressors:map[bz2:0xc00047f610 gz:0xc00047f618 tar:0xc00047f5c0 tar.bz2:0xc00047f5d0 tar.gz:0xc00047f5e0 tar.xz:0xc00047f5f0 tar.zst:0xc00047f600 tbz2:0xc00047f5d0 tgz:0xc00047f5e0 txz:0xc00047f5f0 tzst:0xc00047f600 xz:0xc00047f620 zip:0xc00047f630 zst:0xc00047f628] Getters:map[file:0xc0008845c0 http:0xc0005b0140 https:0xc0005b0190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0630 15:40:22.046954 1557732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2413071494/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.61s)
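
Note: the driver.go:46 lines above show the download fallback this test exercises: the arch-suffixed release asset is tried first, and when its checksum file 404s, the un-suffixed "common" name is fetched instead. A simplified Go sketch of that flow (checksum validation and writing to disk omitted; this is not the actual minikube implementation):

	package main

	import (
		"fmt"
		"net/http"
		"runtime"
	)

	// fetch issues a GET and treats any non-200 status as failure; a real
	// implementation would stream the body to disk and verify its checksum.
	func fetch(url string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		if err := fetch(base + "-" + runtime.GOARCH); err != nil {
			fmt.Println("arch-specific download failed:", err, "- trying the common version")
			if err := fetch(base); err != nil {
				panic(err)
			}
		}
		fmt.Println("driver fetched")
	}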

TestErrorSpam/setup (48.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-127539 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-127539 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-127539 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-127539 --driver=kvm2  --container-runtime=crio: (48.230829732s)
--- PASS: TestErrorSpam/setup (48.23s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.87s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 pause
--- PASS: TestErrorSpam/pause (1.87s)

TestErrorSpam/unpause (1.91s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (5.68s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 stop: (2.337613412s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 stop: (1.464277909s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-127539 --log_dir /tmp/nospam-127539 stop: (1.88211604s)
--- PASS: TestErrorSpam/stop (5.68s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20991-1550299/.minikube/files/etc/test/nested/copy/1557732/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (53.59s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-920930 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-920930 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (53.590963408s)
--- PASS: TestFunctional/serial/StartWithProxy (53.59s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.55s)

=== RUN   TestFunctional/serial/SoftStart
I0630 14:39:41.147118 1557732 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-920930 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-920930 --alsologtostderr -v=8: (32.548085388s)
functional_test.go:680: soft start took 32.548923559s for "functional-920930" cluster.
I0630 14:40:13.695604 1557732 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestFunctional/serial/SoftStart (32.55s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-920930 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 cache add registry.k8s.io/pause:3.1: (1.26158949s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 cache add registry.k8s.io/pause:3.3: (1.290864829s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 cache add registry.k8s.io/pause:latest: (1.273562582s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

TestFunctional/serial/CacheCmd/cache/add_local (2.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-920930 /tmp/TestFunctionalserialCacheCmdcacheadd_local2145415071/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cache add minikube-local-cache-test:functional-920930
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 cache add minikube-local-cache-test:functional-920930: (1.832380423s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cache delete minikube-local-cache-test:functional-920930
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-920930
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.677907ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cache reload
E0630 14:40:20.917340 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:20.923927 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:20.935497 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:20.957110 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:20.998700 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:21.080282 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:21.241828 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:21.563620 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 cache reload: (1.059034815s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 kubectl -- --context functional-920930 get pods
E0630 14:40:22.206018 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-920930 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (33.48s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-920930 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0630 14:40:23.488257 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:26.051313 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:31.173103 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:40:41.414833 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-920930 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.482299126s)
functional_test.go:778: restart took 33.482513711s for "functional-920930" cluster.
I0630 14:40:55.824860 1557732 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestFunctional/serial/ExtraConfig (33.48s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-920930 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
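
Note: ComponentHealth lists the tier=control-plane pods as JSON and asserts each component is Running and Ready, as the phase/status lines above show. A standalone Go approximation of that check, shelling out to the same kubectl command the test runs (struct fields follow the Kubernetes Pod schema; this mirrors, rather than reproduces, the test's logic):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-920930",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}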

TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 logs: (1.453993113s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 logs --file /tmp/TestFunctionalserialLogsFileCmd1949744534/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 logs --file /tmp/TestFunctionalserialLogsFileCmd1949744534/001/logs.txt: (1.4855858s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.78s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-920930 apply -f testdata/invalidsvc.yaml
E0630 14:41:01.896589 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-920930
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-920930: exit status 115 (328.148276ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.113:30872 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-920930 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-920930 delete -f testdata/invalidsvc.yaml: (1.237870949s)
--- PASS: TestFunctional/serial/InvalidService (4.78s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 config get cpus: exit status 14 (67.668147ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 config get cpus: exit status 14 (52.885989ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-920930 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-920930 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.683304ms)
-- stdout --
	* [functional-920930] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0630 14:41:51.556990 1572338 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:41:51.557110 1572338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.557122 1572338 out.go:358] Setting ErrFile to fd 2...
	I0630 14:41:51.557128 1572338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:51.557339 1572338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:41:51.558012 1572338 out.go:352] Setting JSON to false
	I0630 14:41:51.559305 1572338 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":30204,"bootTime":1751264308,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:41:51.559423 1572338 start.go:140] virtualization: kvm guest
	I0630 14:41:51.561744 1572338 out.go:177] * [functional-920930] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:41:51.563400 1572338 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:41:51.563365 1572338 notify.go:220] Checking for updates...
	I0630 14:41:51.566221 1572338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:41:51.567710 1572338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:41:51.569121 1572338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:41:51.570759 1572338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:41:51.572530 1572338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:41:51.574565 1572338 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:41:51.575037 1572338 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.575124 1572338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.592074 1572338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I0630 14:41:51.592665 1572338 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.593335 1572338 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.593355 1572338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.593827 1572338 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.594122 1572338 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.594445 1572338 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:41:51.594757 1572338 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:51.594804 1572338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:51.611646 1572338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0630 14:41:51.612302 1572338 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:51.613001 1572338 main.go:141] libmachine: Using API Version  1
	I0630 14:41:51.613027 1572338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:51.613526 1572338 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:51.613833 1572338 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:51.652574 1572338 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 14:41:51.654498 1572338 start.go:304] selected driver: kvm2
	I0630 14:41:51.654528 1572338 start.go:908] validating driver "kvm2" against &{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:51.654682 1572338 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:41:51.657066 1572338 out.go:201] 
	W0630 14:41:51.658367 1572338 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0630 14:41:51.660052 1572338 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-920930 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
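
Both outcomes above hinge on --dry-run running only the validation phase against the existing profile: the undersized request fails fast with exit status 23 before any VM work, and the flag-compatible variant exits cleanly. A minimal sketch:

# exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY; the existing cluster is untouched
out/minikube-linux-amd64 start -p functional-920930 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
# same profile, no conflicting flags: validation passes, exit 0
out/minikube-linux-amd64 start -p functional-920930 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio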

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-920930 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-920930 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.330479ms)

-- stdout --
	* [functional-920930] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0630 14:41:48.622693 1571973 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:41:48.622930 1571973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:48.622940 1571973 out.go:358] Setting ErrFile to fd 2...
	I0630 14:41:48.622944 1571973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:48.623234 1571973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:41:48.623773 1571973 out.go:352] Setting JSON to false
	I0630 14:41:48.624775 1571973 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":30201,"bootTime":1751264308,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:41:48.624885 1571973 start.go:140] virtualization: kvm guest
	I0630 14:41:48.626977 1571973 out.go:177] * [functional-920930] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0630 14:41:48.628660 1571973 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:41:48.628703 1571973 notify.go:220] Checking for updates...
	I0630 14:41:48.631143 1571973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:41:48.632595 1571973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 14:41:48.633840 1571973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 14:41:48.635186 1571973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:41:48.636449 1571973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:41:48.638049 1571973 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:41:48.638461 1571973 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:48.638534 1571973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:48.655005 1571973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34021
	I0630 14:41:48.655589 1571973 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:48.656155 1571973 main.go:141] libmachine: Using API Version  1
	I0630 14:41:48.656179 1571973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:48.656551 1571973 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:48.656734 1571973 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:48.657011 1571973 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:41:48.657333 1571973 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:48.657380 1571973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:48.674490 1571973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0630 14:41:48.674963 1571973 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:48.675491 1571973 main.go:141] libmachine: Using API Version  1
	I0630 14:41:48.675510 1571973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:48.675956 1571973 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:48.676227 1571973 main.go:141] libmachine: (functional-920930) Calling .DriverName
	I0630 14:41:48.711588 1571973 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0630 14:41:48.712943 1571973 start.go:304] selected driver: kvm2
	I0630 14:41:48.712963 1571973 start.go:908] validating driver "kvm2" against &{Name:functional-920930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-920930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:41:48.713077 1571973 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:41:48.715283 1571973 out.go:201] 
	W0630 14:41:48.716493 1571973 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0630 14:41:48.717545 1571973 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
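
This is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as DryRun, rendered through minikube's French message catalog. The catalog choice follows the process locale, so the harness presumably exports a French locale before invoking minikube (an assumption; the setup is not shown in this excerpt):

# locale variable is an assumption about how the test selects the French catalog
LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-920930 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio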

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (41.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-920930 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-920930 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-2fgsq" [1f0ce418-59cc-4ca6-bd22-780c56a99932] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-2fgsq" [1f0ce418-59cc-4ca6-bd22-780c56a99932] Running
E0630 14:41:42.858848 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 41.004277066s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.113:31021
functional_test.go:1692: http://192.168.39.113:31021: success! body:

Hostname: hello-node-connect-58f9cf68d8-2fgsq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.113:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.113:31021
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (41.50s)
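
The 41.5s is dominated by waiting for the echoserver pod to become Ready; the check itself is a plain NodePort round trip, roughly:

kubectl --context functional-920930 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-920930 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-amd64 -p functional-920930 service hello-node-connect --url)   # e.g. http://192.168.39.113:31021
curl "$URL"                                                                             # echoserver reflects the request, as in the body above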

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)
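
The -o json form is the scripting-friendly variant; a sketch for extracting enabled addons (the name -> {Status, ...} map shape and the jq filter are assumptions, not shown in this log):

out/minikube-linux-amd64 -p functional-920930 addons list -o json \
  | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'   # assumed output shape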

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh -n functional-920930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cp functional-920930:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2702329325/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh -n functional-920930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh -n functional-920930 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)
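
The three invocations cover host-to-VM copy, VM-to-host copy, and copy into a VM directory that does not yet exist (cp creates it); each is verified with an ssh cat. In shell terms:

out/minikube-linux-amd64 -p functional-920930 cp testdata/cp-test.txt /home/docker/cp-test.txt          # host -> VM
out/minikube-linux-amd64 -p functional-920930 cp functional-920930:/home/docker/cp-test.txt ./out.txt   # VM -> host
out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /home/docker/cp-test.txt"                   # verify contents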

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1557732/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /etc/test/nested/copy/1557732/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1557732.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /etc/ssl/certs/1557732.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1557732.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /usr/share/ca-certificates/1557732.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/15577322.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /etc/ssl/certs/15577322.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/15577322.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /usr/share/ca-certificates/15577322.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)
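
Each synced certificate is checked in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named /etc/ssl/certs/<hash>.0 entry. The .0 names look like OpenSSL subject-hash filenames (an assumption; the hashing step is not shown in this log):

# hypothetical check: derive the hash a CA cert would be filed under, then read it in the VM
openssl x509 -in 1557732.pem -noout -hash                                  # e.g. prints 51391683
out/minikube-linux-amd64 -p functional-920930 ssh "sudo cat /etc/ssl/certs/51391683.0"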

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-920930 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
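
The go-template walks the label map of the first node; a variant that also prints values, one per line:

kubectl --context functional-920930 get nodes -o go-template \
  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'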

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "sudo systemctl is-active docker": exit status 1 (257.006217ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "sudo systemctl is-active containerd": exit status 1 (235.649157ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
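
With crio as the active runtime, docker and containerd must both be inactive. systemctl is-active exits 3 for an inactive unit, which surfaces here as "ssh: Process exited with status 3" and a non-zero exit from the wrapping command. The inverse check:

out/minikube-linux-amd64 -p functional-920930 ssh "sudo systemctl is-active crio"     # expected: active, exit 0
out/minikube-linux-amd64 -p functional-920930 ssh "sudo systemctl is-active docker"   # expected: inactive, exit non-zero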

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-920930 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.33.2
registry.k8s.io/kube-proxy:v1.33.2
registry.k8s.io/kube-controller-manager:v1.33.2
registry.k8s.io/kube-apiserver:v1.33.2
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.12.0
localhost/minikube-local-cache-test:functional-920930
localhost/kicbase/echo-server:functional-920930
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-920930 image ls --format short --alsologtostderr:
I0630 14:42:02.326969 1572971 out.go:345] Setting OutFile to fd 1 ...
I0630 14:42:02.327092 1572971 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:02.327097 1572971 out.go:358] Setting ErrFile to fd 2...
I0630 14:42:02.327101 1572971 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:02.327290 1572971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
I0630 14:42:02.327910 1572971 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:02.328013 1572971 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:02.328383 1572971 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:02.328427 1572971 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:02.345806 1572971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
I0630 14:42:02.346489 1572971 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:02.347138 1572971 main.go:141] libmachine: Using API Version  1
I0630 14:42:02.347176 1572971 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:02.347655 1572971 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:02.347963 1572971 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:42:02.349959 1572971 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:02.350005 1572971 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:02.366990 1572971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42495
I0630 14:42:02.367598 1572971 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:02.368054 1572971 main.go:141] libmachine: Using API Version  1
I0630 14:42:02.368081 1572971 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:02.368473 1572971 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:02.368926 1572971 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:42:02.369202 1572971 ssh_runner.go:195] Run: systemctl --version
I0630 14:42:02.369239 1572971 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:42:02.372226 1572971 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:02.373234 1572971 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:42:02.373290 1572971 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:02.373538 1572971 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:42:02.373787 1572971 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:42:02.373987 1572971 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:42:02.374333 1572971 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:42:02.451905 1572971 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:42:02.490799 1572971 main.go:141] libmachine: Making call to close driver server
I0630 14:42:02.490824 1572971 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:02.491149 1572971 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:02.491177 1572971 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:02.491191 1572971 main.go:141] libmachine: Making call to close driver server
I0630 14:42:02.491202 1572971 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:02.491536 1572971 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:02.491553 1572971 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:02.491583 1572971 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
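
As the stderr trace shows, image ls is a thin wrapper: it dials the driver plugin, SSHes into the node, and parses `sudo crictl images --output json`. The same data can be pulled directly:

out/minikube-linux-amd64 -p functional-920930 ssh "sudo crictl images --output json"   # raw source of the listing
out/minikube-linux-amd64 -p functional-920930 image ls --format short                  # formatted view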

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-920930 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.33.2            | 661d404f36f01 | 99.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/kicbase/echo-server           | functional-920930  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-920930  | 3645c678964ee | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.21-0           | 499038711c081 | 154MB  |
| registry.k8s.io/kube-apiserver          | v1.33.2            | ee794efa53d85 | 103MB  |
| registry.k8s.io/kube-controller-manager | v1.33.2            | ff4f56c76b82d | 95.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.12.0            | 1cf5f116067c6 | 71.2MB |
| registry.k8s.io/kube-scheduler          | v1.33.2            | cfed1ff748928 | 74.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20250512-df8de77b | 409467f978b4a | 109MB  |
| localhost/my-image                      | functional-920930  | 6258f0a9320aa | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-920930 image ls --format table --alsologtostderr:
I0630 14:42:07.006843 1573137 out.go:345] Setting OutFile to fd 1 ...
I0630 14:42:07.007093 1573137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:07.007102 1573137 out.go:358] Setting ErrFile to fd 2...
I0630 14:42:07.007106 1573137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:07.007341 1573137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
I0630 14:42:07.008108 1573137 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:07.008212 1573137 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:07.008583 1573137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:07.008658 1573137 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:07.025703 1573137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
I0630 14:42:07.026264 1573137 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:07.026886 1573137 main.go:141] libmachine: Using API Version  1
I0630 14:42:07.026918 1573137 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:07.027431 1573137 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:07.027663 1573137 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:42:07.029877 1573137 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:07.029931 1573137 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:07.046629 1573137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
I0630 14:42:07.047343 1573137 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:07.047884 1573137 main.go:141] libmachine: Using API Version  1
I0630 14:42:07.047916 1573137 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:07.048355 1573137 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:07.048602 1573137 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:42:07.048816 1573137 ssh_runner.go:195] Run: systemctl --version
I0630 14:42:07.048842 1573137 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:42:07.052560 1573137 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:07.052988 1573137 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:42:07.053013 1573137 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:07.053124 1573137 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:42:07.053373 1573137 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:42:07.053623 1573137 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:42:07.053843 1573137 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:42:07.136331 1573137 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:42:07.175901 1573137 main.go:141] libmachine: Making call to close driver server
I0630 14:42:07.175919 1573137 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:07.176225 1573137 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
I0630 14:42:07.176274 1573137 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:07.176304 1573137 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:07.176320 1573137 main.go:141] libmachine: Making call to close driver server
I0630 14:42:07.176331 1573137 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:07.176554 1573137 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:07.176569 1573137 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:07.176585 1573137 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
E0630 14:43:04.780567 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-920930 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6258f0a9320aa5de36cfa3c66e9a7b6fd2663f649b6ac0d63eb7a0d9b9fd5201","repoDigests":["localhost/my-image@sha256:5bc5cdc093b5bcae797f83eedf81b41882d6a76b14c3b3029d03f363450e90e5"],"repoTags":["localhost/my-image:functional-920930"],"size":"1468600"},{"id":"ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ca60874e4be19b02d7698252ed14f556063ab89c28e6aa973893f805f982ee1b","registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137"],"repoTags":["registry.k8s.io/kube-apiserver:v1.33.2"],"size":"102866402"},{"id":"661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19","repoDigests":["registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51","registry.k8s.io/kube-proxy@sha256:ddac50fd605c72319674beb2003bfeb28aab7f512e6b8c437a237fd95e9a3a9b"],"repoTags":["registry.k8s.io/kube-proxy:v1.33.2"],"size":"99154329"},{"id":"cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3","registry.k8s.io/kube-scheduler@sha256:9e90950a65d6ff159ac7faaf8310dba83434f6e45ae53ea7b217abf5ab8926bc"],"repoTags":["registry.k8s.io/kube-scheduler:v1.33.2"],"size":"74509638"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-920930"],"size":"4943877"},{"id":"3645c678964eee71c68a89cefb7bfe0d76a4a68de950b413749c0bc1308681ad","repoDigests":["localhost/minikube-local-cache-test@sha256:cf3c9ce3ca77a44e67a2609e75182697cbcb9759059d7b9742b4c08b0019d7de"],"repoTags":["localhost/minikube-local-cache-test:functional-920930"],"size":"3330"},{"id":"ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081","registry.k8s.io/kube-controller-manager@sha256:cfb4f44bf687e69bf88efa95df5ad5191707579635d5c977a3b6cec2d4be0730"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.33.2"],"size":"95665480"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"850a3c8ff6a73c90b405da8340ad9e05e52d63ad6aff65809017c3bad7b89504","repoDigests":["docker.io/library/c46b84bf6fbdc59e709cfd14d9987050c15d1d45483c534539d4cb2569b0bc01-tmp@sha256:f2164e24fe4521693d2c564d783dfb4dea56e834b4be6e6f38af532e30bd82e2"],"repoTags":[],"size":"1466018"},{"id":"1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b","repoDigests":["registry.k8s.io/coredns/coredns@sha256:2324f485c8db937628a18c293d946327f3a7229b9f77213e8f2256f0b616a4ee","registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.0"],"size":"71169915"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":["registry.k8s.io/etcd@sha256:21d2177d708b53ac0fbd1c073c334d58f913eb75da293ff086610e61af03630a","registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121"],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"154190592"}]

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-920930 image ls --format json --alsologtostderr:
I0630 14:42:06.777656 1573113 out.go:345] Setting OutFile to fd 1 ...
I0630 14:42:06.777899 1573113 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:06.777907 1573113 out.go:358] Setting ErrFile to fd 2...
I0630 14:42:06.777911 1573113 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:06.778092 1573113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
I0630 14:42:06.778730 1573113 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:06.778827 1573113 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:06.779184 1573113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:06.779243 1573113 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:06.795036 1573113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
I0630 14:42:06.795683 1573113 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:06.796298 1573113 main.go:141] libmachine: Using API Version  1
I0630 14:42:06.796319 1573113 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:06.796724 1573113 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:06.796970 1573113 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:42:06.799087 1573113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:06.799144 1573113 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:06.816216 1573113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
I0630 14:42:06.816722 1573113 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:06.817200 1573113 main.go:141] libmachine: Using API Version  1
I0630 14:42:06.817236 1573113 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:06.817624 1573113 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:06.817882 1573113 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:42:06.818197 1573113 ssh_runner.go:195] Run: systemctl --version
I0630 14:42:06.818233 1573113 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:42:06.822344 1573113 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:06.822740 1573113 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:42:06.822779 1573113 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:06.823027 1573113 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:42:06.823277 1573113 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:42:06.823485 1573113 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:42:06.823752 1573113 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:42:06.904248 1573113 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:42:06.946631 1573113 main.go:141] libmachine: Making call to close driver server
I0630 14:42:06.946659 1573113 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:06.947016 1573113 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
I0630 14:42:06.947019 1573113 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:06.947044 1573113 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:06.947057 1573113 main.go:141] libmachine: Making call to close driver server
I0630 14:42:06.947088 1573113 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:06.947420 1573113 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
I0630 14:42:06.947502 1573113 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:06.947522 1573113 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
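For reference, the stderr trace above shows how the listing is produced: minikube SSHes into the node and shells out to crictl. A minimal sketch for reproducing the raw data by hand, assuming the functional-920930 profile is still running:

# Same command the harness runs over SSH ("Run: sudo crictl images --output json" above)
minikube -p functional-920930 ssh -- sudo crictl images --output json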

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-920930 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51
- registry.k8s.io/kube-proxy@sha256:ddac50fd605c72319674beb2003bfeb28aab7f512e6b8c437a237fd95e9a3a9b
repoTags:
- registry.k8s.io/kube-proxy:v1.33.2
size: "99154329"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3645c678964eee71c68a89cefb7bfe0d76a4a68de950b413749c0bc1308681ad
repoDigests:
- localhost/minikube-local-cache-test@sha256:cf3c9ce3ca77a44e67a2609e75182697cbcb9759059d7b9742b4c08b0019d7de
repoTags:
- localhost/minikube-local-cache-test:functional-920930
size: "3330"
- id: cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3
- registry.k8s.io/kube-scheduler@sha256:9e90950a65d6ff159ac7faaf8310dba83434f6e45ae53ea7b217abf5ab8926bc
repoTags:
- registry.k8s.io/kube-scheduler:v1.33.2
size: "74509638"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests:
- registry.k8s.io/etcd@sha256:21d2177d708b53ac0fbd1c073c334d58f913eb75da293ff086610e61af03630a
- registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "154190592"
- id: ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ca60874e4be19b02d7698252ed14f556063ab89c28e6aa973893f805f982ee1b
- registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137
repoTags:
- registry.k8s.io/kube-apiserver:v1.33.2
size: "102866402"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-920930
size: "4943877"
- id: 1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:2324f485c8db937628a18c293d946327f3a7229b9f77213e8f2256f0b616a4ee
- registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.0
size: "71169915"
- id: ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081
- registry.k8s.io/kube-controller-manager@sha256:cfb4f44bf687e69bf88efa95df5ad5191707579635d5c977a3b6cec2d4be0730
repoTags:
- registry.k8s.io/kube-controller-manager:v1.33.2
size: "95665480"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-920930 image ls --format yaml --alsologtostderr:
I0630 14:42:02.547347 1572995 out.go:345] Setting OutFile to fd 1 ...
I0630 14:42:02.547622 1572995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:02.547633 1572995 out.go:358] Setting ErrFile to fd 2...
I0630 14:42:02.547639 1572995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:02.547842 1572995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
I0630 14:42:02.548522 1572995 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:02.548655 1572995 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:02.549081 1572995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:02.549163 1572995 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:02.565323 1572995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
I0630 14:42:02.565901 1572995 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:02.566487 1572995 main.go:141] libmachine: Using API Version  1
I0630 14:42:02.566519 1572995 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:02.566972 1572995 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:02.567208 1572995 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:42:02.569352 1572995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:02.569418 1572995 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:02.585397 1572995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
I0630 14:42:02.585846 1572995 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:02.586366 1572995 main.go:141] libmachine: Using API Version  1
I0630 14:42:02.586416 1572995 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:02.586877 1572995 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:02.587109 1572995 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:42:02.587442 1572995 ssh_runner.go:195] Run: systemctl --version
I0630 14:42:02.587475 1572995 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:42:02.590333 1572995 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:02.590730 1572995 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:42:02.590754 1572995 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:02.590931 1572995 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:42:02.591149 1572995 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:42:02.591413 1572995 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:42:02.591570 1572995 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:42:02.671598 1572995 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:42:02.711631 1572995 main.go:141] libmachine: Making call to close driver server
I0630 14:42:02.711646 1572995 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:02.711960 1572995 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:02.711998 1572995 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:02.712008 1572995 main.go:141] libmachine: Making call to close driver server
I0630 14:42:02.712016 1572995 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:02.712021 1572995 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
I0630 14:42:02.712244 1572995 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:02.712259 1572995 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
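The YAML and JSON listings carry the same fields (id, repoDigests, repoTags, size); only the encoding differs. Condensed from the invocations in this group (the bare `image ls` form is the one functional_test.go:468 runs elsewhere in this report):

minikube -p functional-920930 image ls                # default short listing
minikube -p functional-920930 image ls --format json  # ImageListJson
minikube -p functional-920930 image ls --format yaml  # this test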

TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh pgrep buildkitd: exit status 1 (212.645136ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image build -t localhost/my-image:functional-920930 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 image build -t localhost/my-image:functional-920930 testdata/build --alsologtostderr: (3.563695468s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-920930 image build -t localhost/my-image:functional-920930 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 850a3c8ff6a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-920930
--> 6258f0a9320
Successfully tagged localhost/my-image:functional-920930
6258f0a9320aa5de36cfa3c66e9a7b6fd2663f649b6ac0d63eb7a0d9b9fd5201
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-920930 image build -t localhost/my-image:functional-920930 testdata/build --alsologtostderr:
I0630 14:42:02.984051 1573049 out.go:345] Setting OutFile to fd 1 ...
I0630 14:42:02.984300 1573049 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:02.984310 1573049 out.go:358] Setting ErrFile to fd 2...
I0630 14:42:02.984314 1573049 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:42:02.984491 1573049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
I0630 14:42:02.985086 1573049 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:02.985752 1573049 config.go:182] Loaded profile config "functional-920930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
I0630 14:42:02.986115 1573049 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:02.986162 1573049 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:03.002779 1573049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
I0630 14:42:03.003381 1573049 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:03.003965 1573049 main.go:141] libmachine: Using API Version  1
I0630 14:42:03.003992 1573049 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:03.004370 1573049 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:03.004656 1573049 main.go:141] libmachine: (functional-920930) Calling .GetState
I0630 14:42:03.006923 1573049 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
I0630 14:42:03.006977 1573049 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:42:03.023719 1573049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
I0630 14:42:03.024361 1573049 main.go:141] libmachine: () Calling .GetVersion
I0630 14:42:03.024878 1573049 main.go:141] libmachine: Using API Version  1
I0630 14:42:03.024897 1573049 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:42:03.025236 1573049 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:42:03.025468 1573049 main.go:141] libmachine: (functional-920930) Calling .DriverName
I0630 14:42:03.025670 1573049 ssh_runner.go:195] Run: systemctl --version
I0630 14:42:03.025693 1573049 main.go:141] libmachine: (functional-920930) Calling .GetSSHHostname
I0630 14:42:03.029088 1573049 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:03.029708 1573049 main.go:141] libmachine: (functional-920930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:bf:47", ip: ""} in network mk-functional-920930: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:03 +0000 UTC Type:0 Mac:52:54:00:41:bf:47 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-920930 Clientid:01:52:54:00:41:bf:47}
I0630 14:42:03.029736 1573049 main.go:141] libmachine: (functional-920930) DBG | domain functional-920930 has defined IP address 192.168.39.113 and MAC address 52:54:00:41:bf:47 in network mk-functional-920930
I0630 14:42:03.029864 1573049 main.go:141] libmachine: (functional-920930) Calling .GetSSHPort
I0630 14:42:03.030093 1573049 main.go:141] libmachine: (functional-920930) Calling .GetSSHKeyPath
I0630 14:42:03.030251 1573049 main.go:141] libmachine: (functional-920930) Calling .GetSSHUsername
I0630 14:42:03.030428 1573049 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/functional-920930/id_rsa Username:docker}
I0630 14:42:03.111850 1573049 build_images.go:161] Building image from path: /tmp/build.1120741843.tar
I0630 14:42:03.111931 1573049 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0630 14:42:03.125324 1573049 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1120741843.tar
I0630 14:42:03.130203 1573049 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1120741843.tar: stat -c "%s %y" /var/lib/minikube/build/build.1120741843.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1120741843.tar': No such file or directory
I0630 14:42:03.130244 1573049 ssh_runner.go:362] scp /tmp/build.1120741843.tar --> /var/lib/minikube/build/build.1120741843.tar (3072 bytes)
I0630 14:42:03.160424 1573049 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1120741843
I0630 14:42:03.172540 1573049 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1120741843 -xf /var/lib/minikube/build/build.1120741843.tar
I0630 14:42:03.184039 1573049 crio.go:315] Building image: /var/lib/minikube/build/build.1120741843
I0630 14:42:03.184120 1573049 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-920930 /var/lib/minikube/build/build.1120741843 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0630 14:42:06.459623 1573049 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-920930 /var/lib/minikube/build/build.1120741843 --cgroup-manager=cgroupfs: (3.27546391s)
I0630 14:42:06.459710 1573049 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1120741843
I0630 14:42:06.476350 1573049 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1120741843.tar
I0630 14:42:06.488156 1573049 build_images.go:217] Built localhost/my-image:functional-920930 from /tmp/build.1120741843.tar
I0630 14:42:06.488206 1573049 build_images.go:133] succeeded building to: functional-920930
I0630 14:42:06.488213 1573049 build_images.go:134] failed building to: 
I0630 14:42:06.488246 1573049 main.go:141] libmachine: Making call to close driver server
I0630 14:42:06.488263 1573049 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:06.488590 1573049 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:06.488610 1573049 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:06.488618 1573049 main.go:141] libmachine: Making call to close driver server
I0630 14:42:06.488627 1573049 main.go:141] libmachine: (functional-920930) Calling .Close
I0630 14:42:06.488678 1573049 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
I0630 14:42:06.488842 1573049 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:42:06.488868 1573049 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:42:06.488891 1573049 main.go:141] libmachine: (functional-920930) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)
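Judging from the STEP output above, the testdata/build context amounts to a three-line Dockerfile plus a content.txt payload. A sketch of reproducing the build by hand; the content.txt contents are an assumption, since the report does not show them:

# Reconstructed build context (content.txt payload is a placeholder, not from the report)
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo placeholder > content.txt
minikube -p functional-920930 image build -t localhost/my-image:functional-920930 .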

TestFunctional/parallel/ImageCommands/Setup (1.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.730865018s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-920930
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/MountCmd/any-port (41.58s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdany-port1399654074/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1751294465558974141" to /tmp/TestFunctionalparallelMountCmdany-port1399654074/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1751294465558974141" to /tmp/TestFunctionalparallelMountCmdany-port1399654074/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1751294465558974141" to /tmp/TestFunctionalparallelMountCmdany-port1399654074/001/test-1751294465558974141
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.836277ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0630 14:41:05.803716 1557732 retry.go:31] will retry after 410.894934ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 30 14:41 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 30 14:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 30 14:41 test-1751294465558974141
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh cat /mount-9p/test-1751294465558974141
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-920930 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [671e6fc7-8830-4da1-9cb1-954a8917a998] Pending
helpers_test.go:344: "busybox-mount" [671e6fc7-8830-4da1-9cb1-954a8917a998] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [671e6fc7-8830-4da1-9cb1-954a8917a998] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [671e6fc7-8830-4da1-9cb1-954a8917a998] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 39.00297632s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-920930 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdany-port1399654074/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (41.58s)
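The 9p flow above (mount, probe with findmnt, exercise from a pod, unmount) can be driven by hand; a sketch with illustrative paths, mirroring the commands the test issues:

mkdir -p /tmp/demo-mount && date > /tmp/demo-mount/created-by-test
minikube -p functional-920930 mount /tmp/demo-mount:/mount-9p &       # the test runs this as a daemon
minikube -p functional-920930 ssh "findmnt -T /mount-9p | grep 9p"    # readiness probe the test retries
minikube -p functional-920930 ssh -- ls -la /mount-9p
minikube -p functional-920930 ssh "sudo umount -f /mount-9p"          # teardown, as in the test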

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image load --daemon kicbase/echo-server:functional-920930 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-920930 image load --daemon kicbase/echo-server:functional-920930 --alsologtostderr: (1.442769863s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image load --daemon kicbase/echo-server:functional-920930 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-920930
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image load --daemon kicbase/echo-server:functional-920930 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image save kicbase/echo-server:functional-920930 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image rm kicbase/echo-server:functional-920930 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-920930
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 image save --daemon kicbase/echo-server:functional-920930 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-920930
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
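The last four image tests chain into a save/remove/reload round trip; condensed, with an illustrative tar path in place of the Jenkins workspace path used above:

minikube -p functional-920930 image save kicbase/echo-server:functional-920930 /tmp/echo-server-save.tar
minikube -p functional-920930 image rm kicbase/echo-server:functional-920930
minikube -p functional-920930 image load /tmp/echo-server-save.tar
minikube -p functional-920930 image save --daemon kicbase/echo-server:functional-920930
docker image inspect localhost/kicbase/echo-server:functional-920930   # image lands back under localhost/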

TestFunctional/parallel/ServiceCmd/DeployApp (37.19s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-920930 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-920930 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-ggs67" [5e400a30-5d07-4bfd-8e18-be55dd8c1b8f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-ggs67" [5e400a30-5d07-4bfd-8e18-be55dd8c1b8f] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 37.003994027s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (37.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "300.336149ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "56.960471ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "294.813407ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "52.844974ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
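Note the timings above: the --light variant returns in ~53ms versus ~295ms for the full listing, consistent with --light skipping the per-cluster status probe. A sketch of consuming the output (piping to jq is an assumption, not something the test does):

minikube profile list -o json | jq .
minikube profile list -o json --light | jq .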

TestFunctional/parallel/ServiceCmd/List (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 service list -o json
functional_test.go:1511: Took "464.105045ms" to run "out/minikube-linux-amd64 -p functional-920930 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.113:31240
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.113:31240
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
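The ServiceCmd group resolves the same hello-node NodePort several ways; condensed from the runs above:

minikube -p functional-920930 service list -o json
minikube -p functional-920930 service --namespace=default --https --url hello-node   # -> https://192.168.39.113:31240
minikube -p functional-920930 service hello-node --url                               # -> http://192.168.39.113:31240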

TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T" /mount1: exit status 1 (263.817755ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0630 14:42:00.410975 1557732 retry.go:31] will retry after 353.665977ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-920930 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-920930 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-920930 /tmp/TestFunctionalparallelMountCmdVerifyCleanup70459118/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
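VerifyCleanup relies on a single `mount --kill=true` tearing down every outstanding mount daemon for the profile, which is why the three stop steps afterwards find no surviving parent process. A sketch with illustrative paths:

minikube -p functional-920930 mount /tmp/data:/mount1 &
minikube -p functional-920930 mount /tmp/data:/mount2 &
minikube -p functional-920930 ssh "findmnt -T" /mount1    # probe used by the test
minikube -p functional-920930 mount --kill=true           # kills all mount processes for this profile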

TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-920930
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-920930
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-920930
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (222.4s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m41.652072788s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (222.40s)
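The --ha flag provisions a multi-control-plane cluster in a single start; the invocation under test, trimmed of the test-harness logging flags:

minikube -p ha-848203 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
minikube -p ha-848203 status   # the follow-up the test runs to confirm all nodes are up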

TestMultiControlPlane/serial/DeployApp (7.11s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 kubectl -- rollout status deployment/busybox: (4.779395445s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-67hqg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-9qb4q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-p9wcd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-67hqg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-9qb4q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-p9wcd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-67hqg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-9qb4q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-p9wcd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.11s)
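The deployment check amounts to rolling out the busybox manifest and confirming in-cluster DNS from each replica; condensed (pod names vary per run):

minikube -p ha-848203 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
minikube -p ha-848203 kubectl -- rollout status deployment/busybox
minikube -p ha-848203 kubectl -- exec busybox-58667487b6-67hqg -- nslookup kubernetes.default.svc.cluster.local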

TestMultiControlPlane/serial/PingHostFromPods (1.26s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-67hqg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-67hqg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-9qb4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-9qb4q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-p9wcd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 kubectl -- exec busybox-58667487b6-p9wcd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)

TestMultiControlPlane/serial/AddWorkerNode (53.93s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 node add --alsologtostderr -v 5
E0630 14:55:20.919566 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 node add --alsologtostderr -v 5: (52.943755118s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.93s)

TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-848203 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

TestMultiControlPlane/serial/CopyFile (14.43s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp testdata/cp-test.txt ha-848203:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3812693342/001/cp-test_ha-848203.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203:/home/docker/cp-test.txt ha-848203-m02:/home/docker/cp-test_ha-848203_ha-848203-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test_ha-848203_ha-848203-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203:/home/docker/cp-test.txt ha-848203-m03:/home/docker/cp-test_ha-848203_ha-848203-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test_ha-848203_ha-848203-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203:/home/docker/cp-test.txt ha-848203-m04:/home/docker/cp-test_ha-848203_ha-848203-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test_ha-848203_ha-848203-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp testdata/cp-test.txt ha-848203-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3812693342/001/cp-test_ha-848203-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m02:/home/docker/cp-test.txt ha-848203:/home/docker/cp-test_ha-848203-m02_ha-848203.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test_ha-848203-m02_ha-848203.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m02:/home/docker/cp-test.txt ha-848203-m03:/home/docker/cp-test_ha-848203-m02_ha-848203-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test_ha-848203-m02_ha-848203-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m02:/home/docker/cp-test.txt ha-848203-m04:/home/docker/cp-test_ha-848203-m02_ha-848203-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test_ha-848203-m02_ha-848203-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp testdata/cp-test.txt ha-848203-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3812693342/001/cp-test_ha-848203-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m03:/home/docker/cp-test.txt ha-848203:/home/docker/cp-test_ha-848203-m03_ha-848203.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test_ha-848203-m03_ha-848203.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m03:/home/docker/cp-test.txt ha-848203-m02:/home/docker/cp-test_ha-848203-m03_ha-848203-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test_ha-848203-m03_ha-848203-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m03:/home/docker/cp-test.txt ha-848203-m04:/home/docker/cp-test_ha-848203-m03_ha-848203-m04.txt
E0630 14:56:04.684181 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:04.690645 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:04.702143 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:04.723781 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:04.765400 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test.txt"
E0630 14:56:04.847767 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:05.009472 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test_ha-848203-m03_ha-848203-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp testdata/cp-test.txt ha-848203-m04:/home/docker/cp-test.txt
E0630 14:56:05.331832 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3812693342/001/cp-test_ha-848203-m04.txt
E0630 14:56:05.974270 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m04:/home/docker/cp-test.txt ha-848203:/home/docker/cp-test_ha-848203-m04_ha-848203.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203 "sudo cat /home/docker/cp-test_ha-848203-m04_ha-848203.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m04:/home/docker/cp-test.txt ha-848203-m02:/home/docker/cp-test_ha-848203-m04_ha-848203-m02.txt
E0630 14:56:07.255761 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m02 "sudo cat /home/docker/cp-test_ha-848203-m04_ha-848203-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 cp ha-848203-m04:/home/docker/cp-test.txt ha-848203-m03:/home/docker/cp-test_ha-848203-m04_ha-848203-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 ssh -n ha-848203-m03 "sudo cat /home/docker/cp-test_ha-848203-m04_ha-848203-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.43s)
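Aside on the repeated E0630 cert_rotation.go lines woven through this and the following tests: they are client-go noise from the long-lived test process (note the constant pid 1557732 on every such line), not failures of the test being run. The most likely reading is that client-go's tls-transport-cache registered cert-rotation watchers for the client certs of profiles exercised earlier in the job (functional-920930, addons-301682) and keeps trying to reload those certs after the profiles were deleted. A quick sanity check of that reading (path copied from the error message):

  # the cert the watcher wants is gone, exactly as the error says:
  ls -l /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt
  # expected: ls: cannot access '...client.crt': No such file or directory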

TestMultiControlPlane/serial/StopSecondaryNode (91.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 node stop m02 --alsologtostderr -v 5
E0630 14:56:09.817939 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:14.939851 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:25.181378 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:43.984471 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:56:45.663012 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:57:26.625320 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 node stop m02 --alsologtostderr -v 5: (1m31.074817628s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5: exit status 7 (712.515237ms)

-- stdout --
	ha-848203
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-848203-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-848203-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-848203-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0630 14:57:39.808761 1579957 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:57:39.809038 1579957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:57:39.809049 1579957 out.go:358] Setting ErrFile to fd 2...
	I0630 14:57:39.809053 1579957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:57:39.809285 1579957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 14:57:39.809565 1579957 out.go:352] Setting JSON to false
	I0630 14:57:39.809612 1579957 mustload.go:65] Loading cluster: ha-848203
	I0630 14:57:39.809766 1579957 notify.go:220] Checking for updates...
	I0630 14:57:39.810331 1579957 config.go:182] Loaded profile config "ha-848203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 14:57:39.810364 1579957 status.go:174] checking status of ha-848203 ...
	I0630 14:57:39.810841 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:39.810894 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:39.829059 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0630 14:57:39.829710 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:39.830429 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:39.830456 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:39.830954 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:39.831219 1579957 main.go:141] libmachine: (ha-848203) Calling .GetState
	I0630 14:57:39.833381 1579957 status.go:371] ha-848203 host status = "Running" (err=<nil>)
	I0630 14:57:39.833445 1579957 host.go:66] Checking if "ha-848203" exists ...
	I0630 14:57:39.833898 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:39.833958 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:39.851094 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0630 14:57:39.851848 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:39.852334 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:39.852373 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:39.852755 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:39.852974 1579957 main.go:141] libmachine: (ha-848203) Calling .GetIP
	I0630 14:57:39.856058 1579957 main.go:141] libmachine: (ha-848203) DBG | domain ha-848203 has defined MAC address 52:54:00:24:ed:1d in network mk-ha-848203
	I0630 14:57:39.856499 1579957 main.go:141] libmachine: (ha-848203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ed:1d", ip: ""} in network mk-ha-848203: {Iface:virbr1 ExpiryTime:2025-06-30 15:51:24 +0000 UTC Type:0 Mac:52:54:00:24:ed:1d Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-848203 Clientid:01:52:54:00:24:ed:1d}
	I0630 14:57:39.856526 1579957 main.go:141] libmachine: (ha-848203) DBG | domain ha-848203 has defined IP address 192.168.39.240 and MAC address 52:54:00:24:ed:1d in network mk-ha-848203
	I0630 14:57:39.856696 1579957 host.go:66] Checking if "ha-848203" exists ...
	I0630 14:57:39.857015 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:39.857067 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:39.874057 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0630 14:57:39.874668 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:39.875180 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:39.875204 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:39.875573 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:39.875850 1579957 main.go:141] libmachine: (ha-848203) Calling .DriverName
	I0630 14:57:39.876037 1579957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 14:57:39.876066 1579957 main.go:141] libmachine: (ha-848203) Calling .GetSSHHostname
	I0630 14:57:39.880077 1579957 main.go:141] libmachine: (ha-848203) DBG | domain ha-848203 has defined MAC address 52:54:00:24:ed:1d in network mk-ha-848203
	I0630 14:57:39.880716 1579957 main.go:141] libmachine: (ha-848203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ed:1d", ip: ""} in network mk-ha-848203: {Iface:virbr1 ExpiryTime:2025-06-30 15:51:24 +0000 UTC Type:0 Mac:52:54:00:24:ed:1d Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-848203 Clientid:01:52:54:00:24:ed:1d}
	I0630 14:57:39.880740 1579957 main.go:141] libmachine: (ha-848203) DBG | domain ha-848203 has defined IP address 192.168.39.240 and MAC address 52:54:00:24:ed:1d in network mk-ha-848203
	I0630 14:57:39.880916 1579957 main.go:141] libmachine: (ha-848203) Calling .GetSSHPort
	I0630 14:57:39.881159 1579957 main.go:141] libmachine: (ha-848203) Calling .GetSSHKeyPath
	I0630 14:57:39.881367 1579957 main.go:141] libmachine: (ha-848203) Calling .GetSSHUsername
	I0630 14:57:39.881540 1579957 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/ha-848203/id_rsa Username:docker}
	I0630 14:57:39.966227 1579957 ssh_runner.go:195] Run: systemctl --version
	I0630 14:57:39.973130 1579957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:57:39.991041 1579957 kubeconfig.go:125] found "ha-848203" server: "https://192.168.39.254:8443"
	I0630 14:57:39.991085 1579957 api_server.go:166] Checking apiserver status ...
	I0630 14:57:39.991129 1579957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:57:40.013082 1579957 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1468/cgroup
	W0630 14:57:40.024459 1579957 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1468/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0630 14:57:40.024543 1579957 ssh_runner.go:195] Run: ls
	I0630 14:57:40.029547 1579957 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0630 14:57:40.034710 1579957 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0630 14:57:40.034739 1579957 status.go:463] ha-848203 apiserver status = Running (err=<nil>)
	I0630 14:57:40.034750 1579957 status.go:176] ha-848203 status: &{Name:ha-848203 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:57:40.034766 1579957 status.go:174] checking status of ha-848203-m02 ...
	I0630 14:57:40.035098 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:40.035141 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:40.051353 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46179
	I0630 14:57:40.051831 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:40.052353 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:40.052376 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:40.052787 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:40.053047 1579957 main.go:141] libmachine: (ha-848203-m02) Calling .GetState
	I0630 14:57:40.054943 1579957 status.go:371] ha-848203-m02 host status = "Stopped" (err=<nil>)
	I0630 14:57:40.054965 1579957 status.go:384] host is not running, skipping remaining checks
	I0630 14:57:40.054972 1579957 status.go:176] ha-848203-m02 status: &{Name:ha-848203-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:57:40.055015 1579957 status.go:174] checking status of ha-848203-m03 ...
	I0630 14:57:40.055349 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:40.055403 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:40.071401 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0630 14:57:40.071989 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:40.072534 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:40.072552 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:40.072888 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:40.073130 1579957 main.go:141] libmachine: (ha-848203-m03) Calling .GetState
	I0630 14:57:40.075247 1579957 status.go:371] ha-848203-m03 host status = "Running" (err=<nil>)
	I0630 14:57:40.075277 1579957 host.go:66] Checking if "ha-848203-m03" exists ...
	I0630 14:57:40.075586 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:40.075627 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:40.091434 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I0630 14:57:40.092030 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:40.092601 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:40.092621 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:40.093057 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:40.093291 1579957 main.go:141] libmachine: (ha-848203-m03) Calling .GetIP
	I0630 14:57:40.096613 1579957 main.go:141] libmachine: (ha-848203-m03) DBG | domain ha-848203-m03 has defined MAC address 52:54:00:f9:3d:d9 in network mk-ha-848203
	I0630 14:57:40.097192 1579957 main.go:141] libmachine: (ha-848203-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:3d:d9", ip: ""} in network mk-ha-848203: {Iface:virbr1 ExpiryTime:2025-06-30 15:53:38 +0000 UTC Type:0 Mac:52:54:00:f9:3d:d9 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-848203-m03 Clientid:01:52:54:00:f9:3d:d9}
	I0630 14:57:40.097228 1579957 main.go:141] libmachine: (ha-848203-m03) DBG | domain ha-848203-m03 has defined IP address 192.168.39.69 and MAC address 52:54:00:f9:3d:d9 in network mk-ha-848203
	I0630 14:57:40.097400 1579957 host.go:66] Checking if "ha-848203-m03" exists ...
	I0630 14:57:40.097760 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:40.097814 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:40.114045 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0630 14:57:40.114611 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:40.115072 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:40.115091 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:40.115561 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:40.115800 1579957 main.go:141] libmachine: (ha-848203-m03) Calling .DriverName
	I0630 14:57:40.116031 1579957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 14:57:40.116057 1579957 main.go:141] libmachine: (ha-848203-m03) Calling .GetSSHHostname
	I0630 14:57:40.119297 1579957 main.go:141] libmachine: (ha-848203-m03) DBG | domain ha-848203-m03 has defined MAC address 52:54:00:f9:3d:d9 in network mk-ha-848203
	I0630 14:57:40.119845 1579957 main.go:141] libmachine: (ha-848203-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:3d:d9", ip: ""} in network mk-ha-848203: {Iface:virbr1 ExpiryTime:2025-06-30 15:53:38 +0000 UTC Type:0 Mac:52:54:00:f9:3d:d9 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-848203-m03 Clientid:01:52:54:00:f9:3d:d9}
	I0630 14:57:40.119876 1579957 main.go:141] libmachine: (ha-848203-m03) DBG | domain ha-848203-m03 has defined IP address 192.168.39.69 and MAC address 52:54:00:f9:3d:d9 in network mk-ha-848203
	I0630 14:57:40.120073 1579957 main.go:141] libmachine: (ha-848203-m03) Calling .GetSSHPort
	I0630 14:57:40.120288 1579957 main.go:141] libmachine: (ha-848203-m03) Calling .GetSSHKeyPath
	I0630 14:57:40.120467 1579957 main.go:141] libmachine: (ha-848203-m03) Calling .GetSSHUsername
	I0630 14:57:40.120681 1579957 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/ha-848203-m03/id_rsa Username:docker}
	I0630 14:57:40.210493 1579957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:57:40.233761 1579957 kubeconfig.go:125] found "ha-848203" server: "https://192.168.39.254:8443"
	I0630 14:57:40.233794 1579957 api_server.go:166] Checking apiserver status ...
	I0630 14:57:40.233836 1579957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:57:40.255713 1579957 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1898/cgroup
	W0630 14:57:40.268765 1579957 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1898/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0630 14:57:40.268852 1579957 ssh_runner.go:195] Run: ls
	I0630 14:57:40.275001 1579957 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0630 14:57:40.280925 1579957 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0630 14:57:40.280964 1579957 status.go:463] ha-848203-m03 apiserver status = Running (err=<nil>)
	I0630 14:57:40.280991 1579957 status.go:176] ha-848203-m03 status: &{Name:ha-848203-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:57:40.281157 1579957 status.go:174] checking status of ha-848203-m04 ...
	I0630 14:57:40.281682 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:40.281763 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:40.299498 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0630 14:57:40.300032 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:40.300632 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:40.300657 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:40.301061 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:40.301279 1579957 main.go:141] libmachine: (ha-848203-m04) Calling .GetState
	I0630 14:57:40.303186 1579957 status.go:371] ha-848203-m04 host status = "Running" (err=<nil>)
	I0630 14:57:40.303209 1579957 host.go:66] Checking if "ha-848203-m04" exists ...
	I0630 14:57:40.303523 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:40.303567 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:40.320080 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46369
	I0630 14:57:40.320651 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:40.321143 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:40.321163 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:40.321626 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:40.321861 1579957 main.go:141] libmachine: (ha-848203-m04) Calling .GetIP
	I0630 14:57:40.324599 1579957 main.go:141] libmachine: (ha-848203-m04) DBG | domain ha-848203-m04 has defined MAC address 52:54:00:f0:fd:a2 in network mk-ha-848203
	I0630 14:57:40.325100 1579957 main.go:141] libmachine: (ha-848203-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:a2", ip: ""} in network mk-ha-848203: {Iface:virbr1 ExpiryTime:2025-06-30 15:55:16 +0000 UTC Type:0 Mac:52:54:00:f0:fd:a2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-848203-m04 Clientid:01:52:54:00:f0:fd:a2}
	I0630 14:57:40.325156 1579957 main.go:141] libmachine: (ha-848203-m04) DBG | domain ha-848203-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:f0:fd:a2 in network mk-ha-848203
	I0630 14:57:40.325380 1579957 host.go:66] Checking if "ha-848203-m04" exists ...
	I0630 14:57:40.325775 1579957 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:57:40.325841 1579957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:57:40.342934 1579957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I0630 14:57:40.343488 1579957 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:57:40.344003 1579957 main.go:141] libmachine: Using API Version  1
	I0630 14:57:40.344024 1579957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:57:40.344399 1579957 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:57:40.344606 1579957 main.go:141] libmachine: (ha-848203-m04) Calling .DriverName
	I0630 14:57:40.344879 1579957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 14:57:40.344900 1579957 main.go:141] libmachine: (ha-848203-m04) Calling .GetSSHHostname
	I0630 14:57:40.348414 1579957 main.go:141] libmachine: (ha-848203-m04) DBG | domain ha-848203-m04 has defined MAC address 52:54:00:f0:fd:a2 in network mk-ha-848203
	I0630 14:57:40.349011 1579957 main.go:141] libmachine: (ha-848203-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:fd:a2", ip: ""} in network mk-ha-848203: {Iface:virbr1 ExpiryTime:2025-06-30 15:55:16 +0000 UTC Type:0 Mac:52:54:00:f0:fd:a2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-848203-m04 Clientid:01:52:54:00:f0:fd:a2}
	I0630 14:57:40.349038 1579957 main.go:141] libmachine: (ha-848203-m04) DBG | domain ha-848203-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:f0:fd:a2 in network mk-ha-848203
	I0630 14:57:40.349311 1579957 main.go:141] libmachine: (ha-848203-m04) Calling .GetSSHPort
	I0630 14:57:40.349680 1579957 main.go:141] libmachine: (ha-848203-m04) Calling .GetSSHKeyPath
	I0630 14:57:40.349905 1579957 main.go:141] libmachine: (ha-848203-m04) Calling .GetSSHUsername
	I0630 14:57:40.350108 1579957 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/ha-848203-m04/id_rsa Username:docker}
	I0630 14:57:40.442553 1579957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:57:40.463289 1579957 status.go:176] ha-848203-m04 status: &{Name:ha-848203-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.79s)
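A note on the exit status 7 above: `minikube status` reports a stopped node through its exit code rather than through stderr, so the non-zero exit here is what the test expects while m02 is down. My reading of minikube's status command is that the exit code is a bitmask of "not running" flags for the host (1), the cluster (2), and Kubernetes (4), OR'd across nodes, which makes 7 the expected value for a fully stopped node; treat the exact flag values as an assumption, not something this log confirms. A minimal manual check (profile name taken from the log):

  out/minikube-linux-amd64 -p ha-848203 status
  echo "exit: $?"   # 0 when everything runs; 7 observed here with m02 stopped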

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 node start m02 --alsologtostderr -v 5: (36.436994039s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5: (1.10444805s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.037294812s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (414.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 stop --alsologtostderr -v 5
E0630 14:58:48.547663 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:00:20.916807 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:01:04.683873 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:01:32.389398 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 stop --alsologtostderr -v 5: (4m35.151707712s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 start --wait true --alsologtostderr -v 5: (2m19.264270807s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (414.57s)

TestMultiControlPlane/serial/DeleteSecondaryNode (19.18s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 node delete m03 --alsologtostderr -v 5
E0630 15:05:20.917881 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 node delete m03 --alsologtostderr -v 5: (18.368374627s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.18s)
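The go-template in the final check above iterates every node's status.conditions and prints the status of each Ready condition, one value per line, so the test can assert that all nodes remaining after the m03 delete report True. The same query in jsonpath form, which can be easier to scan (assuming the same kubectl context):

  kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'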

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (272.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 stop --alsologtostderr -v 5
E0630 15:06:04.684726 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 stop --alsologtostderr -v 5: (4m32.784365698s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5: exit status 7 (121.800502ms)

-- stdout --
	ha-848203
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-848203-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-848203-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0630 15:10:07.107090 1584139 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:10:07.107386 1584139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:10:07.107397 1584139 out.go:358] Setting ErrFile to fd 2...
	I0630 15:10:07.107401 1584139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:10:07.107578 1584139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:10:07.107808 1584139 out.go:352] Setting JSON to false
	I0630 15:10:07.107849 1584139 mustload.go:65] Loading cluster: ha-848203
	I0630 15:10:07.107957 1584139 notify.go:220] Checking for updates...
	I0630 15:10:07.108444 1584139 config.go:182] Loaded profile config "ha-848203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:10:07.108479 1584139 status.go:174] checking status of ha-848203 ...
	I0630 15:10:07.109008 1584139 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:10:07.109072 1584139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:10:07.128150 1584139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I0630 15:10:07.128678 1584139 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:10:07.129205 1584139 main.go:141] libmachine: Using API Version  1
	I0630 15:10:07.129231 1584139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:10:07.129657 1584139 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:10:07.129871 1584139 main.go:141] libmachine: (ha-848203) Calling .GetState
	I0630 15:10:07.131495 1584139 status.go:371] ha-848203 host status = "Stopped" (err=<nil>)
	I0630 15:10:07.131517 1584139 status.go:384] host is not running, skipping remaining checks
	I0630 15:10:07.131525 1584139 status.go:176] ha-848203 status: &{Name:ha-848203 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:10:07.131570 1584139 status.go:174] checking status of ha-848203-m02 ...
	I0630 15:10:07.131860 1584139 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:10:07.131894 1584139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:10:07.148981 1584139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0630 15:10:07.149486 1584139 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:10:07.149980 1584139 main.go:141] libmachine: Using API Version  1
	I0630 15:10:07.150010 1584139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:10:07.150471 1584139 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:10:07.150777 1584139 main.go:141] libmachine: (ha-848203-m02) Calling .GetState
	I0630 15:10:07.152464 1584139 status.go:371] ha-848203-m02 host status = "Stopped" (err=<nil>)
	I0630 15:10:07.152483 1584139 status.go:384] host is not running, skipping remaining checks
	I0630 15:10:07.152491 1584139 status.go:176] ha-848203-m02 status: &{Name:ha-848203-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:10:07.152512 1584139 status.go:174] checking status of ha-848203-m04 ...
	I0630 15:10:07.152794 1584139 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:10:07.152832 1584139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:10:07.169147 1584139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0630 15:10:07.169720 1584139 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:10:07.170316 1584139 main.go:141] libmachine: Using API Version  1
	I0630 15:10:07.170347 1584139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:10:07.170754 1584139 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:10:07.170966 1584139 main.go:141] libmachine: (ha-848203-m04) Calling .GetState
	I0630 15:10:07.173600 1584139 status.go:371] ha-848203-m04 host status = "Stopped" (err=<nil>)
	I0630 15:10:07.173621 1584139 status.go:384] host is not running, skipping remaining checks
	I0630 15:10:07.173627 1584139 status.go:176] ha-848203-m04 status: &{Name:ha-848203-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.91s)

TestMultiControlPlane/serial/RestartCluster (137.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0630 15:10:20.916971 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:11:04.686628 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (2m16.417758s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (137.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (79.98s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 node add --control-plane --alsologtostderr -v 5
E0630 15:12:27.751735 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:13:23.986098 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-848203 node add --control-plane --alsologtostderr -v 5: (1m19.025939824s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-848203 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.98s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

TestJSONOutput/start/Command (59.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-215566 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-215566 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (59.869636402s)
--- PASS: TestJSONOutput/start/Command (59.87s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-215566 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-215566 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-215566 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-215566 --output=json --user=testUser: (7.382938269s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-546343 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-546343 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.500173ms)

-- stdout --
	{"specversion":"1.0","id":"2ebe68b9-11b9-4669-9999-e063db9199b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-546343] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"53a4c7dc-1e0c-4257-9fd6-0981194ca55c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20991"}}
	{"specversion":"1.0","id":"9c0811b3-1675-42e5-93f1-8d70ffd11cf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"062b3bc9-7787-4c7b-b699-b08a5f16db49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig"}}
	{"specversion":"1.0","id":"7dd99404-2b53-402d-a96a-c0f39852c320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube"}}
	{"specversion":"1.0","id":"43eedc0e-1fc7-449c-9497-343b0ef6edb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"58c76271-7ba0-4681-8661-b0b0b79c7986","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fdcae730-1830-45e5-b898-a31403e90afe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-546343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-546343
--- PASS: TestErrorJSONOutput (0.23s)
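As the stdout above shows, every line minikube emits under --output=json is a CloudEvents 1.0 envelope (specversion, id, source, type, datacontenttype, data), with the failure delivered as a final io.k8s.sigs.minikube.error event carrying name, exitcode, and message. A sketch for extracting just that error from the stream, assuming jq is available (the minikube command itself is copied from the test):

  out/minikube-linux-amd64 start -p json-output-error-546343 --memory=3072 \
      --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
             | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'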

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (94.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-410871 --driver=kvm2  --container-runtime=crio
E0630 15:15:20.917186 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-410871 --driver=kvm2  --container-runtime=crio: (45.326947898s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-425570 --driver=kvm2  --container-runtime=crio
E0630 15:16:04.687145 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-425570 --driver=kvm2  --container-runtime=crio: (46.247818072s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-410871
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-425570
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-425570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-425570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-425570: (1.035566518s)
helpers_test.go:175: Cleaning up "first-410871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-410871
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-410871: (1.115030115s)
--- PASS: TestMinikubeProfile (94.88s)
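The `profile list -ojson` calls above return a single JSON document with the known profiles split into valid and invalid arrays, which the test inspects after each `profile` switch. A sketch for pulling just the profile names out, assuming jq and assuming top-level valid/invalid keys with a Name field per profile (that shape matches current minikube output as I understand it, but verify against your build):

  out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'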

TestMountStart/serial/StartWithMountFirst (33.78s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-400196 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-400196 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.778414824s)
--- PASS: TestMountStart/serial/StartWithMountFirst (33.78s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-400196 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-400196 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-418479 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-418479 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.451759152s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418479 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418479 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-400196 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418479 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418479 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.40s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-418479
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-418479: (1.397898119s)
--- PASS: TestMountStart/serial/Stop (1.40s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.56s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-418479
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-418479: (22.556983675s)
--- PASS: TestMountStart/serial/RestartStopped (23.56s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418479 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418479 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973445 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973445 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.039806202s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-973445 -- rollout status deployment/busybox: (4.696404856s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-96p6v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-t5lll -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-96p6v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-t5lll -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-96p6v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-t5lll -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.39s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-96p6v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-96p6v -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-t5lll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-973445 -- exec busybox-58667487b6-t5lll -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
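
The two exec steps above encode the host-reachability check: line 5 of busybox's nslookup output carries the resolved address, `cut -d' ' -f3` takes its third space-separated field, and the follow-up step pings that address once. A hedged Go sketch of the same sequence (the pod name is copied from this run; any running busybox pod would do):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same pipeline the test runs inside the pod.
        pipeline := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
        pod := "busybox-58667487b6-96p6v" // pod name taken from this run
        out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", pipeline).Output()
        if err != nil {
            log.Fatalf("kubectl exec failed: %v", err)
        }
        hostIP := strings.TrimSpace(string(out)) // 192.168.39.1 in this run
        if err := exec.Command("kubectl", "exec", pod, "--",
            "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
            log.Fatalf("host %s not reachable from pod: %v", hostIP, err)
        }
        fmt.Println("pod can reach host at", hostIP)
    }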

                                                
                                    
TestMultiNode/serial/AddNode (51.10s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-973445 -v=5 --alsologtostderr
E0630 15:20:20.918582 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-973445 -v=5 --alsologtostderr: (50.454216033s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.10s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-973445 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.00s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp testdata/cp-test.txt multinode-973445:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2368838917/001/cp-test_multinode-973445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445:/home/docker/cp-test.txt multinode-973445-m02:/home/docker/cp-test_multinode-973445_multinode-973445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m02 "sudo cat /home/docker/cp-test_multinode-973445_multinode-973445-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445:/home/docker/cp-test.txt multinode-973445-m03:/home/docker/cp-test_multinode-973445_multinode-973445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m03 "sudo cat /home/docker/cp-test_multinode-973445_multinode-973445-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp testdata/cp-test.txt multinode-973445-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2368838917/001/cp-test_multinode-973445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445-m02:/home/docker/cp-test.txt multinode-973445:/home/docker/cp-test_multinode-973445-m02_multinode-973445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445 "sudo cat /home/docker/cp-test_multinode-973445-m02_multinode-973445.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445-m02:/home/docker/cp-test.txt multinode-973445-m03:/home/docker/cp-test_multinode-973445-m02_multinode-973445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m03 "sudo cat /home/docker/cp-test_multinode-973445-m02_multinode-973445-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp testdata/cp-test.txt multinode-973445-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2368838917/001/cp-test_multinode-973445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445-m03:/home/docker/cp-test.txt multinode-973445:/home/docker/cp-test_multinode-973445-m03_multinode-973445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445 "sudo cat /home/docker/cp-test_multinode-973445-m03_multinode-973445.txt"
E0630 15:21:04.683979 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 cp multinode-973445-m03:/home/docker/cp-test.txt multinode-973445-m02:/home/docker/cp-test_multinode-973445-m03_multinode-973445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 ssh -n multinode-973445-m02 "sudo cat /home/docker/cp-test_multinode-973445-m03_multinode-973445-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.00s)
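
Each hop above follows the same pattern: `minikube cp` to place the file, then `minikube ssh -n <node> "sudo cat ..."` to confirm it landed. A minimal Go sketch of one such hop, using the profile and paths from this run:

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one minikube invocation and fails loudly, roughly as the
    // test helpers above do.
    func run(args ...string) []byte {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
        }
        return out
    }

    func main() {
        // Copy a local file onto the control-plane node, then read it back.
        run("-p", "multinode-973445", "cp", "testdata/cp-test.txt",
            "multinode-973445:/home/docker/cp-test.txt")
        out := run("-p", "multinode-973445", "ssh", "-n", "multinode-973445",
            "sudo cat /home/docker/cp-test.txt")
        log.Printf("node sees: %s", out)
    }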

                                                
                                    
TestMultiNode/serial/StopNode (3.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-973445 node stop m03: (2.308312544s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973445 status: exit status 7 (479.846721ms)

                                                
                                                
-- stdout --
	multinode-973445
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-973445-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-973445-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr: exit status 7 (463.159388ms)

                                                
                                                
-- stdout --
	multinode-973445
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-973445-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-973445-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0630 15:21:08.376952 1592630 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:21:08.377237 1592630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:21:08.377247 1592630 out.go:358] Setting ErrFile to fd 2...
	I0630 15:21:08.377252 1592630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:21:08.377512 1592630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:21:08.377692 1592630 out.go:352] Setting JSON to false
	I0630 15:21:08.377725 1592630 mustload.go:65] Loading cluster: multinode-973445
	I0630 15:21:08.377868 1592630 notify.go:220] Checking for updates...
	I0630 15:21:08.378259 1592630 config.go:182] Loaded profile config "multinode-973445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:21:08.378284 1592630 status.go:174] checking status of multinode-973445 ...
	I0630 15:21:08.378774 1592630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:21:08.378827 1592630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:21:08.398772 1592630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0630 15:21:08.399335 1592630 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:21:08.399957 1592630 main.go:141] libmachine: Using API Version  1
	I0630 15:21:08.399986 1592630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:21:08.400392 1592630 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:21:08.400645 1592630 main.go:141] libmachine: (multinode-973445) Calling .GetState
	I0630 15:21:08.403374 1592630 status.go:371] multinode-973445 host status = "Running" (err=<nil>)
	I0630 15:21:08.403400 1592630 host.go:66] Checking if "multinode-973445" exists ...
	I0630 15:21:08.403729 1592630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:21:08.403777 1592630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:21:08.420992 1592630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35963
	I0630 15:21:08.421515 1592630 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:21:08.421951 1592630 main.go:141] libmachine: Using API Version  1
	I0630 15:21:08.421974 1592630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:21:08.422331 1592630 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:21:08.422547 1592630 main.go:141] libmachine: (multinode-973445) Calling .GetIP
	I0630 15:21:08.425510 1592630 main.go:141] libmachine: (multinode-973445) DBG | domain multinode-973445 has defined MAC address 52:54:00:ba:0b:5c in network mk-multinode-973445
	I0630 15:21:08.425965 1592630 main.go:141] libmachine: (multinode-973445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:0b:5c", ip: ""} in network mk-multinode-973445: {Iface:virbr1 ExpiryTime:2025-06-30 16:18:19 +0000 UTC Type:0 Mac:52:54:00:ba:0b:5c Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-973445 Clientid:01:52:54:00:ba:0b:5c}
	I0630 15:21:08.425993 1592630 main.go:141] libmachine: (multinode-973445) DBG | domain multinode-973445 has defined IP address 192.168.39.215 and MAC address 52:54:00:ba:0b:5c in network mk-multinode-973445
	I0630 15:21:08.426172 1592630 host.go:66] Checking if "multinode-973445" exists ...
	I0630 15:21:08.426507 1592630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:21:08.426589 1592630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:21:08.443732 1592630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
	I0630 15:21:08.444249 1592630 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:21:08.444925 1592630 main.go:141] libmachine: Using API Version  1
	I0630 15:21:08.444947 1592630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:21:08.445391 1592630 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:21:08.445702 1592630 main.go:141] libmachine: (multinode-973445) Calling .DriverName
	I0630 15:21:08.445938 1592630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 15:21:08.445984 1592630 main.go:141] libmachine: (multinode-973445) Calling .GetSSHHostname
	I0630 15:21:08.450506 1592630 main.go:141] libmachine: (multinode-973445) DBG | domain multinode-973445 has defined MAC address 52:54:00:ba:0b:5c in network mk-multinode-973445
	I0630 15:21:08.451111 1592630 main.go:141] libmachine: (multinode-973445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:0b:5c", ip: ""} in network mk-multinode-973445: {Iface:virbr1 ExpiryTime:2025-06-30 16:18:19 +0000 UTC Type:0 Mac:52:54:00:ba:0b:5c Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-973445 Clientid:01:52:54:00:ba:0b:5c}
	I0630 15:21:08.451147 1592630 main.go:141] libmachine: (multinode-973445) DBG | domain multinode-973445 has defined IP address 192.168.39.215 and MAC address 52:54:00:ba:0b:5c in network mk-multinode-973445
	I0630 15:21:08.451324 1592630 main.go:141] libmachine: (multinode-973445) Calling .GetSSHPort
	I0630 15:21:08.451538 1592630 main.go:141] libmachine: (multinode-973445) Calling .GetSSHKeyPath
	I0630 15:21:08.451718 1592630 main.go:141] libmachine: (multinode-973445) Calling .GetSSHUsername
	I0630 15:21:08.451874 1592630 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/multinode-973445/id_rsa Username:docker}
	I0630 15:21:08.528874 1592630 ssh_runner.go:195] Run: systemctl --version
	I0630 15:21:08.534789 1592630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:21:08.551801 1592630 kubeconfig.go:125] found "multinode-973445" server: "https://192.168.39.215:8443"
	I0630 15:21:08.551850 1592630 api_server.go:166] Checking apiserver status ...
	I0630 15:21:08.551885 1592630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:21:08.571392 1592630 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0630 15:21:08.582699 1592630 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:21:08.582773 1592630 ssh_runner.go:195] Run: ls
	I0630 15:21:08.587596 1592630 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0630 15:21:08.592712 1592630 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0630 15:21:08.592739 1592630 status.go:463] multinode-973445 apiserver status = Running (err=<nil>)
	I0630 15:21:08.592750 1592630 status.go:176] multinode-973445 status: &{Name:multinode-973445 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:21:08.592767 1592630 status.go:174] checking status of multinode-973445-m02 ...
	I0630 15:21:08.593117 1592630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:21:08.593165 1592630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:21:08.609450 1592630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35095
	I0630 15:21:08.609893 1592630 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:21:08.610409 1592630 main.go:141] libmachine: Using API Version  1
	I0630 15:21:08.610432 1592630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:21:08.610796 1592630 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:21:08.611034 1592630 main.go:141] libmachine: (multinode-973445-m02) Calling .GetState
	I0630 15:21:08.612884 1592630 status.go:371] multinode-973445-m02 host status = "Running" (err=<nil>)
	I0630 15:21:08.612907 1592630 host.go:66] Checking if "multinode-973445-m02" exists ...
	I0630 15:21:08.613210 1592630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:21:08.613254 1592630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:21:08.630380 1592630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0630 15:21:08.630928 1592630 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:21:08.631441 1592630 main.go:141] libmachine: Using API Version  1
	I0630 15:21:08.631489 1592630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:21:08.631908 1592630 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:21:08.632171 1592630 main.go:141] libmachine: (multinode-973445-m02) Calling .GetIP
	I0630 15:21:08.635889 1592630 main.go:141] libmachine: (multinode-973445-m02) DBG | domain multinode-973445-m02 has defined MAC address 52:54:00:c0:f9:2b in network mk-multinode-973445
	I0630 15:21:08.636413 1592630 main.go:141] libmachine: (multinode-973445-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:f9:2b", ip: ""} in network mk-multinode-973445: {Iface:virbr1 ExpiryTime:2025-06-30 16:19:20 +0000 UTC Type:0 Mac:52:54:00:c0:f9:2b Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-973445-m02 Clientid:01:52:54:00:c0:f9:2b}
	I0630 15:21:08.636445 1592630 main.go:141] libmachine: (multinode-973445-m02) DBG | domain multinode-973445-m02 has defined IP address 192.168.39.194 and MAC address 52:54:00:c0:f9:2b in network mk-multinode-973445
	I0630 15:21:08.636611 1592630 host.go:66] Checking if "multinode-973445-m02" exists ...
	I0630 15:21:08.636928 1592630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:21:08.636968 1592630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:21:08.653523 1592630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0630 15:21:08.654092 1592630 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:21:08.654667 1592630 main.go:141] libmachine: Using API Version  1
	I0630 15:21:08.654689 1592630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:21:08.655128 1592630 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:21:08.655409 1592630 main.go:141] libmachine: (multinode-973445-m02) Calling .DriverName
	I0630 15:21:08.655656 1592630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 15:21:08.655687 1592630 main.go:141] libmachine: (multinode-973445-m02) Calling .GetSSHHostname
	I0630 15:21:08.659056 1592630 main.go:141] libmachine: (multinode-973445-m02) DBG | domain multinode-973445-m02 has defined MAC address 52:54:00:c0:f9:2b in network mk-multinode-973445
	I0630 15:21:08.659590 1592630 main.go:141] libmachine: (multinode-973445-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:f9:2b", ip: ""} in network mk-multinode-973445: {Iface:virbr1 ExpiryTime:2025-06-30 16:19:20 +0000 UTC Type:0 Mac:52:54:00:c0:f9:2b Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-973445-m02 Clientid:01:52:54:00:c0:f9:2b}
	I0630 15:21:08.659622 1592630 main.go:141] libmachine: (multinode-973445-m02) DBG | domain multinode-973445-m02 has defined IP address 192.168.39.194 and MAC address 52:54:00:c0:f9:2b in network mk-multinode-973445
	I0630 15:21:08.659797 1592630 main.go:141] libmachine: (multinode-973445-m02) Calling .GetSSHPort
	I0630 15:21:08.659946 1592630 main.go:141] libmachine: (multinode-973445-m02) Calling .GetSSHKeyPath
	I0630 15:21:08.660148 1592630 main.go:141] libmachine: (multinode-973445-m02) Calling .GetSSHUsername
	I0630 15:21:08.660364 1592630 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1550299/.minikube/machines/multinode-973445-m02/id_rsa Username:docker}
	I0630 15:21:08.749216 1592630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:21:08.764831 1592630 status.go:176] multinode-973445-m02 status: &{Name:multinode-973445-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:21:08.764878 1592630 status.go:174] checking status of multinode-973445-m03 ...
	I0630 15:21:08.765226 1592630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:21:08.765283 1592630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:21:08.782502 1592630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0630 15:21:08.783141 1592630 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:21:08.783648 1592630 main.go:141] libmachine: Using API Version  1
	I0630 15:21:08.783670 1592630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:21:08.783992 1592630 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:21:08.784201 1592630 main.go:141] libmachine: (multinode-973445-m03) Calling .GetState
	I0630 15:21:08.786184 1592630 status.go:371] multinode-973445-m03 host status = "Stopped" (err=<nil>)
	I0630 15:21:08.786205 1592630 status.go:384] host is not running, skipping remaining checks
	I0630 15:21:08.786213 1592630 status.go:176] multinode-973445-m03 status: &{Name:multinode-973445-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.25s)
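
Note the expected exit code: with m03 stopped, `minikube status` returns 7 rather than 0, so the test treats a non-zero exit as data, not failure. A Go sketch of handling that; reading exit code 7 as "some host stopped" is an inference from this run, not a documented contract:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "multinode-973445", "status").Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            fmt.Println("at least one node is stopped; partial status follows")
        default:
            log.Fatalf("status failed outright: %v", err)
        }
        fmt.Print(string(out)) // stdout is still populated on exit status 7
    }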

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-973445 node start m03 -v=5 --alsologtostderr: (38.932486874s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.62s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (322.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-973445
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-973445
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-973445: (3m4.240349612s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973445 --wait=true -v=5 --alsologtostderr
E0630 15:25:20.916725 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:26:04.684546 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973445 --wait=true -v=5 --alsologtostderr: (2m17.9184417s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-973445
--- PASS: TestMultiNode/serial/RestartKeepsNodes (322.27s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-973445 node delete m03: (2.250087464s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.84s)
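
The final assertion above uses a kubectl go-template to print each node's "Ready" condition, which should yield one "True" per surviving node after the delete. The same check, reduced to a small Go sketch with the template copied from the invocation above:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        tmpl := `{{range .items}}{{range .status.conditions}}` +
            `{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
        if err != nil {
            log.Fatalf("kubectl get nodes failed: %v", err)
        }
        fmt.Print(string(out)) // expect " True" once per remaining node
    }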

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 stop
E0630 15:29:07.756380 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:30:03.990266 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-973445 stop: (3m2.016044352s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973445 status: exit status 7 (101.807982ms)

                                                
                                                
-- stdout --
	multinode-973445
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-973445-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr: exit status 7 (103.26169ms)

                                                
                                                
-- stdout --
	multinode-973445
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-973445-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0630 15:30:15.690566 1595496 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:30:15.690876 1595496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:30:15.690887 1595496 out.go:358] Setting ErrFile to fd 2...
	I0630 15:30:15.690893 1595496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:30:15.691107 1595496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:30:15.691315 1595496 out.go:352] Setting JSON to false
	I0630 15:30:15.691374 1595496 mustload.go:65] Loading cluster: multinode-973445
	I0630 15:30:15.691435 1595496 notify.go:220] Checking for updates...
	I0630 15:30:15.691817 1595496 config.go:182] Loaded profile config "multinode-973445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:30:15.691844 1595496 status.go:174] checking status of multinode-973445 ...
	I0630 15:30:15.692310 1595496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:30:15.692367 1595496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:30:15.710103 1595496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
	I0630 15:30:15.710818 1595496 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:30:15.711519 1595496 main.go:141] libmachine: Using API Version  1
	I0630 15:30:15.711548 1595496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:30:15.712001 1595496 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:30:15.712253 1595496 main.go:141] libmachine: (multinode-973445) Calling .GetState
	I0630 15:30:15.714428 1595496 status.go:371] multinode-973445 host status = "Stopped" (err=<nil>)
	I0630 15:30:15.714451 1595496 status.go:384] host is not running, skipping remaining checks
	I0630 15:30:15.714459 1595496 status.go:176] multinode-973445 status: &{Name:multinode-973445 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:30:15.714478 1595496 status.go:174] checking status of multinode-973445-m02 ...
	I0630 15:30:15.714803 1595496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1550299/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:30:15.714850 1595496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:30:15.732842 1595496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0630 15:30:15.733366 1595496 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:30:15.733938 1595496 main.go:141] libmachine: Using API Version  1
	I0630 15:30:15.733966 1595496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:30:15.734417 1595496 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:30:15.734682 1595496 main.go:141] libmachine: (multinode-973445-m02) Calling .GetState
	I0630 15:30:15.736956 1595496 status.go:371] multinode-973445-m02 host status = "Stopped" (err=<nil>)
	I0630 15:30:15.736978 1595496 status.go:384] host is not running, skipping remaining checks
	I0630 15:30:15.736984 1595496 status.go:176] multinode-973445-m02 status: &{Name:multinode-973445-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.22s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (96.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973445 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0630 15:30:20.916671 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:31:04.683983 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973445 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.841816743s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-973445 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (96.42s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-973445
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973445-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-973445-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (81.574666ms)

                                                
                                                
-- stdout --
	* [multinode-973445-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-973445-m02' is duplicated with machine name 'multinode-973445-m02' in profile 'multinode-973445'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-973445-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-973445-m03 --driver=kvm2  --container-runtime=crio: (45.911666858s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-973445
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-973445: exit status 80 (238.850037ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-973445 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-973445-m03 already exists in multinode-973445-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-973445-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.13s)

                                                
                                    
TestRunningBinaryUpgrade (199.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.957779888 start -p running-upgrade-581591 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.957779888 start -p running-upgrade-581591 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (2m6.450782089s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-581591 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-581591 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.475609262s)
helpers_test.go:175: Cleaning up "running-upgrade-581591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-581591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-581591: (1.198069969s)
--- PASS: TestRunningBinaryUpgrade (199.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-446957 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-446957 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (85.563627ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-446957] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-446957 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-446957 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m36.373503945s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-446957 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.64s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (109.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2692922522 start -p stopped-upgrade-858807 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2692922522 start -p stopped-upgrade-858807 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m4.033943276s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2692922522 -p stopped-upgrade-858807 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2692922522 -p stopped-upgrade-858807 stop: (2.154335809s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-858807 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-858807 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.69608155s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.89s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (38.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-446957 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-446957 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (37.0143303s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-446957 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-446957 status -o json: exit status 2 (291.22194ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-446957","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-446957
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.22s)

                                                
                                    
TestNoKubernetes/serial/Start (27.70s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-446957 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-446957 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (27.703724408s)
--- PASS: TestNoKubernetes/serial/Start (27.70s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-446957 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-446957 "sudo systemctl is-active --quiet service kubelet": exit status 1 (250.83272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
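
The non-zero exit here is the passing outcome: `systemctl is-active --quiet` exits non-zero for an inactive unit, so a failing ssh proves the kubelet is down. A Go sketch of the same inverted check, with the command string copied verbatim from this run:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-446957",
            "sudo systemctl is-active --quiet service kubelet")
        err := cmd.Run()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            log.Fatal("kubelet is active, but this profile should not run it")
        case errors.As(err, &exitErr):
            fmt.Printf("kubelet not active, as expected (exit %d)\n", exitErr.ExitCode())
        default:
            log.Fatalf("could not invoke minikube ssh: %v", err)
        }
    }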

                                                
                                    
TestNoKubernetes/serial/ProfileList (30.30s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.772906592s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.525731683s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.30s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-446957
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-446957: (1.474098436s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-446957 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-446957 --driver=kvm2  --container-runtime=crio: (23.816586956s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.82s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-858807
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

TestPause/serial/Start (100.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-011818 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-011818 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m40.98801449s)
--- PASS: TestPause/serial/Start (100.99s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-446957 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-446957 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.093026ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/false (3.25s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-668101 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-668101 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.439534ms)

-- stdout --
	* [false-668101] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I0630 15:40:12.032925 1602945 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:40:12.033237 1602945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:40:12.033248 1602945 out.go:358] Setting ErrFile to fd 2...
	I0630 15:40:12.033252 1602945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:40:12.033519 1602945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1550299/.minikube/bin
	I0630 15:40:12.034277 1602945 out.go:352] Setting JSON to false
	I0630 15:40:12.035319 1602945 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":33704,"bootTime":1751264308,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:40:12.035459 1602945 start.go:140] virtualization: kvm guest
	I0630 15:40:12.037539 1602945 out.go:177] * [false-668101] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:40:12.038769 1602945 notify.go:220] Checking for updates...
	I0630 15:40:12.038845 1602945 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:40:12.040225 1602945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:40:12.041710 1602945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1550299/kubeconfig
	I0630 15:40:12.043113 1602945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1550299/.minikube
	I0630 15:40:12.044557 1602945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:40:12.046040 1602945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:40:12.047836 1602945 config.go:182] Loaded profile config "force-systemd-env-185417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:40:12.047973 1602945 config.go:182] Loaded profile config "kubernetes-upgrade-691468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0630 15:40:12.048107 1602945 config.go:182] Loaded profile config "pause-011818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
	I0630 15:40:12.048220 1602945 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:40:12.086954 1602945 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:40:12.088157 1602945 start.go:304] selected driver: kvm2
	I0630 15:40:12.088175 1602945 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:40:12.088188 1602945 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:40:12.090449 1602945 out.go:201] 
	W0630 15:40:12.091744 1602945 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0630 15:40:12.092930 1602945 out.go:201] 

** /stderr **
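Note: exit status 14 is the MK_USAGE error shown in the stderr above: minikube rejects --cni=false up front because the crio runtime depends on a CNI plugin for pod networking, and this test passes precisely by confirming the refusal. A start line the validation would accept (bridge is one of minikube's built-in CNI choices; this command is illustrative, not taken from this run):

	out/minikube-linux-amd64 start -p false-668101 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio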
net_test.go:88: 
----------------------- debugLogs start: false-668101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-668101

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-668101

>>> host: /etc/nsswitch.conf:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /etc/hosts:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /etc/resolv.conf:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-668101

>>> host: crictl pods:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: crictl containers:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> k8s: describe netcat deployment:
error: context "false-668101" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-668101" does not exist

>>> k8s: netcat logs:
error: context "false-668101" does not exist

>>> k8s: describe coredns deployment:
error: context "false-668101" does not exist

>>> k8s: describe coredns pods:
error: context "false-668101" does not exist

>>> k8s: coredns logs:
error: context "false-668101" does not exist

>>> k8s: describe api server pod(s):
error: context "false-668101" does not exist

>>> k8s: api server logs:
error: context "false-668101" does not exist

>>> host: /etc/cni:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: ip a s:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: ip r s:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: iptables-save:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: iptables table nat:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> k8s: describe kube-proxy daemon set:
error: context "false-668101" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-668101" does not exist

>>> k8s: kube-proxy logs:
error: context "false-668101" does not exist

>>> host: kubelet daemon status:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: kubelet daemon config:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> k8s: kubelet logs:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-668101

>>> host: docker daemon status:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: docker daemon config:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /etc/docker/daemon.json:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: docker system info:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: cri-docker daemon status:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: cri-docker daemon config:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: cri-dockerd version:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: containerd daemon status:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: containerd daemon config:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /etc/containerd/config.toml:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: containerd config dump:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: crio daemon status:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: crio daemon config:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: /etc/crio:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

>>> host: crio config:
* Profile "false-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-668101"

----------------------- debugLogs end: false-668101 [took: 2.984494839s] --------------------------------
helpers_test.go:175: Cleaning up "false-668101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-668101
--- PASS: TestNetworkPlugins/group/false (3.25s)

TestStartStop/group/no-preload/serial/FirstStart (93.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-733305 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-733305 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (1m33.713590224s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.71s)

TestStartStop/group/embed-certs/serial/FirstStart (84.88s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-662970 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-662970 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (1m24.880836062s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.88s)

TestStartStop/group/no-preload/serial/DeployApp (12.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-733305 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ca167626-2a16-4b2e-86b9-c0e14897e487] Pending
helpers_test.go:344: "busybox" [ca167626-2a16-4b2e-86b9-c0e14897e487] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ca167626-2a16-4b2e-86b9-c0e14897e487] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.00417052s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-733305 exec busybox -- /bin/sh -c "ulimit -n"
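Note: testdata/busybox.yaml itself is not reproduced in this report; a hypothetical minimal equivalent, inferred only from the pod name, label, and busybox image that appear elsewhere in this run, might look like this (hedged sketch -- the real manifest may differ):

	kubectl --context no-preload-733305 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]   # keeps the container alive for the "ulimit -n" exec
	EOF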
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.31s)

TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-662970 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a5b83340-cf08-4421-a25e-b29f05dc0a71] Pending
helpers_test.go:344: "busybox" [a5b83340-cf08-4421-a25e-b29f05dc0a71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a5b83340-cf08-4421-a25e-b29f05dc0a71] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00366686s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-662970 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-733305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-733305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016358592s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-733305 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (90.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-733305 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-733305 --alsologtostderr -v=3: (1m30.863993899s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.86s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-662970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-662970 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (91.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-662970 --alsologtostderr -v=3
E0630 15:45:20.916973 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-662970 --alsologtostderr -v=3: (1m31.047778778s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.05s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-800301 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
E0630 15:45:47.758604 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:46:04.684371 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-800301 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (59.462089482s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.46s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-733305 -n no-preload-733305
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-733305 -n no-preload-733305: exit status 7 (78.90326ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
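Note: "minikube status" encodes component state in its exit code; the code appears to be built from per-component bit flags (host, kubelet, apiserver), so 7 is consistent with all three being down on a stopped profile, which is why the harness marks it "may be ok". A hedged script-level version of the same check:

	out/minikube-linux-amd64 status -p no-preload-733305 >/dev/null 2>&1
	rc=$?
	[ "$rc" -eq 7 ] && echo "profile fully stopped (exit code $rc)"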
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-733305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (64.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-733305 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-733305 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (1m4.091550918s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-733305 -n no-preload-733305
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (64.40s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800301 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd6630ca-97f2-44ff-8d44-53b8d5f81e04] Pending
helpers_test.go:344: "busybox" [dd6630ca-97f2-44ff-8d44-53b8d5f81e04] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dd6630ca-97f2-44ff-8d44-53b8d5f81e04] Running
E0630 15:46:43.992335 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004266336s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800301 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-662970 -n embed-certs-662970
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-662970 -n embed-certs-662970: exit status 7 (77.742053ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-662970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (67.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-662970 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-662970 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (1m6.80956138s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-662970 -n embed-certs-662970
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (67.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-800301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-800301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040350492s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-800301 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-800301 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-800301 --alsologtostderr -v=3: (1m31.445944364s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.42s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xjgxz" [6aa38a62-e8ad-4063-a9d0-041b1679a58a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xjgxz" [6aa38a62-e8ad-4063-a9d0-041b1679a58a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.417130966s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.42s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4r2w2" [031b2bc9-0aa3-4561-96aa-08e2607cd565] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4r2w2" [031b2bc9-0aa3-4561-96aa-08e2607cd565] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004874397s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xjgxz" [6aa38a62-e8ad-4063-a9d0-041b1679a58a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004968323s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-733305 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-733305 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.88s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-733305 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-733305 -n no-preload-733305
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-733305 -n no-preload-733305: exit status 2 (262.350378ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-733305 -n no-preload-733305
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-733305 -n no-preload-733305: exit status 2 (261.496573ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-733305 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-733305 -n no-preload-733305
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-733305 -n no-preload-733305
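Note: during the paused window, status exits 2 for the APIServer ("Paused") and Kubelet ("Stopped") checks, which the harness tolerates; only after unpause do the final two status runs exit cleanly. The same pause/verify/unpause cycle as a standalone sketch (hedged, not from the repo):

	out/minikube-linux-amd64 pause -p no-preload-733305
	state=$(out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-733305 || true)
	[ "$state" = "Paused" ] && out/minikube-linux-amd64 unpause -p no-preload-733305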
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4r2w2" [031b2bc9-0aa3-4561-96aa-08e2607cd565] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004334351s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-662970 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/FirstStart (51.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-208177 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-208177 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (51.128364734s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-662970 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
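Note: the "Found non-minikube image" lines are informational; the harness reports images outside the stock Kubernetes/minikube set while verifying the expected ones are present. To eyeball the same list outside the test (jq assumed, and the JSON layout -- an array of image objects with a repoTags field -- is an assumption here):

	out/minikube-linux-amd64 -p embed-certs-662970 image list --format=json | jq -r '.[].repoTags[]'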
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-662970 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-662970 -n embed-certs-662970
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-662970 -n embed-certs-662970: exit status 2 (298.798433ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-662970 -n embed-certs-662970
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-662970 -n embed-certs-662970: exit status 2 (302.591772ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-662970 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-662970 -n embed-certs-662970
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-662970 -n embed-certs-662970
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.06s)

TestNetworkPlugins/group/auto/Start (76.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m16.77161306s)
--- PASS: TestNetworkPlugins/group/auto/Start (76.77s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301: exit status 7 (79.937104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-800301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (81.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-800301 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-800301 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (1m21.392283182s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (81.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-208177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-208177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.139295821s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-208177 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-208177 --alsologtostderr -v=3: (11.377423827s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-208177 -n newest-cni-208177
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-208177 -n newest-cni-208177: exit status 7 (94.139375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-208177 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (54.74s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-208177 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-208177 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.33.2: (54.419017538s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-208177 -n newest-cni-208177
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (54.74s)
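Note how this restart narrows --wait to specific components instead of --wait=true: in bare CNI mode no workload pods can schedule until a network addon is applied, so the test only waits for the apiserver, the system pods, and the default service account. Flag values copied from the run above:

    out/minikube-linux-amd64 start -p newest-cni-208177 --memory=3072 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.33.2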

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-836310 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-836310 --alsologtostderr -v=3: (2.34147652s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-836310 -n old-k8s-version-836310: exit status 7 (86.59652ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-836310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-668101 "pgrep -a kubelet"
I0630 15:49:21.098163 1557732 config.go:182] Loaded profile config "auto-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-668101 replace --force -f testdata/netcat-deployment.yaml
I0630 15:49:21.478577 1557732 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wwvwr" [a0a46418-57fa-4020-ac24-969187ec51be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wwvwr" [a0a46418-57fa-4020-ac24-969187ec51be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003839006s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.39s)
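Each NetCatPod step force-replaces the same netcat deployment and then polls until a pod labelled app=netcat is Running; the Pending phase above is just the dnsutils image being pulled. An equivalent manual check, with kubectl wait standing in for the test's own poll loop:

    kubectl --context auto-668101 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-668101 wait --for=condition=Ready pod -l app=netcat --timeout=15m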

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-668101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
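The three probes above exercise the pod network from inside the netcat deployment: DNS resolution of the cluster service, loopback on the pod itself, and hairpin traffic (the pod reaching itself back through its own service name). The commands as issued by net_test.go:

    kubectl --context auto-668101 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"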

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nhbtl" [931a6335-73dc-4ad4-865d-e0bb3b1d70af] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nhbtl" [931a6335-73dc-4ad4-865d-e0bb3b1d70af] Running
E0630 15:49:48.660312 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:48.666796 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:48.678373 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:48.700041 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:48.744058 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:48.825689 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:48.988512 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:49.310938 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:49:49.952408 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005026123s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (72.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m12.699925278s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nhbtl" [931a6335-73dc-4ad4-865d-e0bb3b1d70af] Running
E0630 15:49:53.796027 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005234392s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-800301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-208177 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
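VerifyKubernetesImages dumps the images present in the node's runtime and reports anything outside the expected Kubernetes set; kindest/kindnetd shows up here, presumably pulled for the cluster's CNI. A sketch of the same listing (the jq filter and the repoTags field name are assumptions for illustration, not part of the test):

    out/minikube-linux-amd64 -p newest-cni-208177 image list --format=json
    out/minikube-linux-amd64 -p newest-cni-208177 image list --format=json | jq -r '.[].repoTags[]'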

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-208177 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-208177 -n newest-cni-208177
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-208177 -n newest-cni-208177: exit status 2 (266.122731ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-208177 -n newest-cni-208177
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-208177 -n newest-cni-208177: exit status 2 (269.297762ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-208177 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-208177 -n newest-cni-208177
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-208177 -n newest-cni-208177
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-800301 image list --format=json
E0630 15:49:58.918230 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-800301 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301: exit status 2 (340.890927ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301: exit status 2 (290.793627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-800301 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-800301 -n default-k8s-diff-port-800301
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (108.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m48.110407595s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (127.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0630 15:50:09.160407 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:50:20.917055 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:50:29.642384 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:51:04.684172 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m7.182499545s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (127.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dwflh" [447fbc24-aebc-488c-8cc8-5ebfc7d483a4] Running
E0630 15:51:10.604319 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004350123s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-668101 "pgrep -a kubelet"
I0630 15:51:11.119836 1557732 config.go:182] Loaded profile config "kindnet-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-668101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hcr6n" [1e6dc433-b164-4a5f-a998-5bc95fcbd121] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hcr6n" [1e6dc433-b164-4a5f-a998-5bc95fcbd121] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004877899s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-668101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (64.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0630 15:51:45.069325 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m4.344020256s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.34s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qtjz6" [b74c0158-0d55-47ad-a983-7fcaddd42693] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-qtjz6" [b74c0158-0d55-47ad-a983-7fcaddd42693] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.010027733s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-668101 "pgrep -a kubelet"
I0630 15:51:54.200215 1557732 config.go:182] Loaded profile config "calico-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-668101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l6zvm" [04839189-08a7-48e9-8dd8-0ebeae1b9b42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0630 15:51:55.311175 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-l6zvm" [04839189-08a7-48e9-8dd8-0ebeae1b9b42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005648152s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-668101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-668101 "pgrep -a kubelet"
I0630 15:52:11.630680 1557732 config.go:182] Loaded profile config "custom-flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-668101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gnlmw" [23fd4e67-7ff5-4b86-b8c8-f6131c13201e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0630 15:52:15.793313 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-gnlmw" [23fd4e67-7ff5-4b86-b8c8-f6131c13201e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.009114502s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-668101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (83.46s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m23.45518492s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.46s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (76.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-668101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m16.673736678s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.67s)
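All the Start steps in this group share the same base invocation and differ only in how the pod network is selected, which is what the matrix exercises. A sketch with the flag values collected from the runs above (BASE is just shorthand introduced here):

    BASE="--memory=3072 --driver=kvm2 --container-runtime=crio"
    out/minikube-linux-amd64 start -p auto-668101 $BASE                                 # minikube picks a CNI for crio
    out/minikube-linux-amd64 start -p kindnet-668101 --cni=kindnet $BASE
    out/minikube-linux-amd64 start -p calico-668101 --cni=calico $BASE
    out/minikube-linux-amd64 start -p flannel-668101 --cni=flannel $BASE
    out/minikube-linux-amd64 start -p bridge-668101 --cni=bridge $BASE
    out/minikube-linux-amd64 start -p custom-flannel-668101 --cni=testdata/kube-flannel.yaml $BASE   # custom manifest path
    out/minikube-linux-amd64 start -p enable-default-cni-668101 --enable-default-cni=true $BASE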

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-668101 "pgrep -a kubelet"
I0630 15:52:46.557552 1557732 config.go:182] Loaded profile config "enable-default-cni-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-668101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qkn6b" [429aef63-2828-4a4b-a1e6-db33fb93a06c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qkn6b" [429aef63-2828-4a4b-a1e6-db33fb93a06c] Running
E0630 15:52:56.754926 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004287665s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-668101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9f8wv" [b8f3028d-d7b7-4572-9cf2-575a870ef048] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004346975s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-668101 "pgrep -a kubelet"
I0630 15:53:55.011902 1557732 config.go:182] Loaded profile config "flannel-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-668101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dbcmv" [a0c8d2bf-b367-4271-b373-eafe517206b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dbcmv" [a0c8d2bf-b367-4271-b373-eafe517206b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005342773s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-668101 "pgrep -a kubelet"
I0630 15:53:59.793283 1557732 config.go:182] Loaded profile config "bridge-668101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-668101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6mwp8" [bfcf6be7-d6c8-4510-b906-36a51ef6a56a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6mwp8" [bfcf6be7-d6c8-4510-b906-36a51ef6a56a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004738757s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-668101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-668101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-668101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0630 15:54:31.729637 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:54:41.971917 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:54:48.661107 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:55:02.454287 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:55:16.368804 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/no-preload-733305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:55:20.917296 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/addons-301682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:55:43.416436 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:04.686179 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/functional-920930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:04.878232 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:04.884779 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:04.896352 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:04.917960 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:04.959493 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:05.041510 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:05.203211 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:05.525183 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:06.167470 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:07.449481 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:10.011251 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:15.133378 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:25.375606 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:34.815318 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:45.857152 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:47.888802 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:47.895247 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:47.906660 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:47.928187 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:47.969678 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:48.051238 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:48.212892 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:48.534677 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:49.176252 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:50.458262 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:53.019986 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:56:58.141552 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:02.518365 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/default-k8s-diff-port-800301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:05.338503 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/auto-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:08.383335 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:11.941646 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:11.948198 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:11.959687 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:11.981213 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:12.022793 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:12.104354 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:12.265984 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:12.587774 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:13.229942 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:14.511802 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:17.073924 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:22.195789 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:26.818830 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/kindnet-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:28.865019 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:32.437464 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:46.780440 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:46.786871 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:46.798355 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:46.819872 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:46.861449 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:46.943042 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:47.104714 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:47.426502 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:48.068871 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:49.351000 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:51.912511 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:52.919221 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/custom-flannel-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:57:57.034885 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:07.277346 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/enable-default-cni-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:58:09.827314 1557732 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1550299/.minikube/profiles/calico-668101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

Test skip (40/322)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.33.2/cached-images 0
15 TestDownloadOnly/v1.33.2/binaries 0
16 TestDownloadOnly/v1.33.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
270 TestStartStop/group/disable-driver-mounts 0.16
278 TestNetworkPlugins/group/kubenet 3.31
286 TestNetworkPlugins/group/cilium 3.9

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.33.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.33.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.33.2/cached-images (0.00s)

TestDownloadOnly/v1.33.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.33.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.33.2/binaries (0.00s)

TestDownloadOnly/v1.33.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.33.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.33.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301682 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-643169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-643169
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.31s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-668101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-668101

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-668101

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /etc/hosts:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /etc/resolv.conf:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-668101

>>> host: crictl pods:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: crictl containers:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> k8s: describe netcat deployment:
error: context "kubenet-668101" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-668101" does not exist

>>> k8s: netcat logs:
error: context "kubenet-668101" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-668101" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-668101" does not exist

>>> k8s: coredns logs:
error: context "kubenet-668101" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-668101" does not exist

>>> k8s: api server logs:
error: context "kubenet-668101" does not exist

>>> host: /etc/cni:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: ip a s:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: ip r s:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: iptables-save:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: iptables table nat:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-668101" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-668101" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-668101" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: kubelet daemon config:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> k8s: kubelet logs:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-668101

>>> host: docker daemon status:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: docker daemon config:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: docker system info:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: cri-docker daemon status:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: cri-docker daemon config:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: cri-dockerd version:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: containerd daemon status:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: containerd daemon config:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: containerd config dump:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: crio daemon status:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: crio daemon config:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: /etc/crio:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

>>> host: crio config:
* Profile "kubenet-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-668101"

----------------------- debugLogs end: kubenet-668101 [took: 3.148256413s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-668101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-668101
--- SKIP: TestNetworkPlugins/group/kubenet (3.31s)

TestNetworkPlugins/group/cilium (3.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-668101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-668101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: crictl containers:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> k8s: describe netcat deployment:
error: context "cilium-668101" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-668101" does not exist

>>> k8s: netcat logs:
error: context "cilium-668101" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-668101" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-668101" does not exist

>>> k8s: coredns logs:
error: context "cilium-668101" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-668101" does not exist

>>> k8s: api server logs:
error: context "cilium-668101" does not exist

>>> host: /etc/cni:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: ip a s:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: ip r s:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: iptables-save:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: iptables table nat:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-668101

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-668101

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-668101" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-668101" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-668101

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-668101

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-668101" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-668101" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-668101" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-668101" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-668101" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: kubelet daemon config:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> k8s: kubelet logs:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-668101

>>> host: docker daemon status:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: docker daemon config:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: docker system info:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: cri-docker daemon status:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: cri-docker daemon config:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: cri-dockerd version:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: containerd daemon status:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: containerd daemon config:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: containerd config dump:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: crio daemon status:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: crio daemon config:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: /etc/crio:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

>>> host: crio config:
* Profile "cilium-668101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668101"

----------------------- debugLogs end: cilium-668101 [took: 3.736639371s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-668101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-668101
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)
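
Note: every probe in the debugLogs dump above fails for the same underlying reason: the test was skipped before "minikube start" ever ran, so the cilium-668101 profile and its kubeconfig context were never created (the ">>> k8s: kubectl config:" section shows an empty kubeconfig with null clusters, contexts, and users). A minimal sketch of the failure mode, assuming a kubeconfig that contains no cilium-668101 context; the error line is the same one kubectl emits throughout the dump:

$ kubectl --context cilium-668101 get pods
error: context "cilium-668101" does not exist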